Analysis of energy transfer in quantum networks using kinetic network approximations

Coherent energy transfer in pigment-protein complexes has been studied by mapping the quantum network to a kinetic network. This gives an analytic way to find parameter values for optimal transfer efficiency. In the case of the Fenna-Matthews-Olson (FMO) complex, the comparison of quantum and kinetic network evolution shows that dephasing-assisted energy transfer is driven by the two-site coherent interaction, and not by system-wide coherence. Using the Schur complement, we find a new kinetic network that gives a closer approximation to the quantum network by including all multi-site coherence contributions. Our new network approximation can be expanded as a series with contributions representing different numbers of coherently interacting sites. For both kinetic networks we study the system relaxation time, the time it takes for the excitation to spread throughout the complex. We make mathematically rigorous estimates of the relaxation time when comparing the kinetic and quantum networks. Numerical simulations comparing the coherent model and the two kinetic network models confirm our bounds, and show that the relative error of the new kinetic network approximation is several orders of magnitude smaller.

Keywords: exciton transfer, quantum efficiency, kinetic networks, FMO, coherent energy transfer, quantum networks, Schur complement.

Introduction

Since coherent energy transfer in the Fenna-Matthews-Olson complex (FMO) has been observed [6,9,13], extensive experimental and theoretical research has been dedicated to studying coherent resonant transfer [5] and the coherent pigment-protein interaction [12,8]. In particular, numerical solutions of simple models have shown that dephasing (the destruction of the coherences) at an intermediate rate helps to increase the energy transfer efficiency [10,11]. This has been called dephasing- or environment-assisted energy transfer, and is analogous to a critically damped oscillator. The dephasing corresponds to damping and causes the exciton to relax to an equal distribution over every pigment site instead of staying localized due to the energy mismatch between the sites. The models are based on two assumptions. First, only a single exciton is present, located at any one of the seven pigments. The pigment exciton energies and the pigment dipole-dipole interactions [4,1] then lead to an oscillatory evolution of the system. Second, the site-environment interactions are assumed to be purely Markovian, without any temporal or spatial correlations. The environment interactions are dephasing, recombination and trapping. Dephasing destroys the site coherences without destroying the exciton itself, while phonon recombination or photon re-emission lead to loss of the exciton to the environment. Trapping is the transfer of the exciton to the reaction center, where the electronic energy is converted to chemical energy; in FMO it occurs at pigment 3. The transfer efficiency is the probability that an exciton starting at site 1 or site 6 reaches the reaction center. For a general system with n pigments, we convert the master equation of the coherent model into the vector form ρ̇ = Mρ, where ρ ∈ R^(n²) is the density matrix in vector form and M is a real n² × n²-matrix. Two procedures to find M are presented in 2.2 and 3.5. To study population transfer channels and conditions for optimal transfer, a mapping to kinetic networks has been proposed [3,7].
A kinetic network is a system where the exciton jumps incoherently between sites according to some fixed rates, i.e., a continuous-time Markov process. In its simplest version this approximation only takes into account the coherent interaction between pairs of sites to derive the transfer rate between them. If the two sites interact with strength V, have an energy separation E, and both sites experience dephasing at rate γ and population loss at rate κ, then the rate is

µ = 2V²(γ + κ) / ((γ + κ)² + E²).    (1)

This rate is maximized for the intermediate dephasing rate γ = E − κ, so the phenomenon of dephasing-assisted transfer is maintained in this approximation. For a system with n sites, these rates constitute the off-diagonals of an n × n rate matrix N₀, and the system populations evolve according to

ṗ = N₀ p,

where p ∈ Rⁿ is the time-dependent population vector. Figure 1 displays the transfer efficiency with models M and N₀ for different γ; the dephasing-assisted regime clearly shows as a peak around γ ≈ 170 cm⁻¹. At the peak the population evolution of M is well approximated by that of N₀; therefore dephasing-assisted energy transfer can be explained by the relatively simple coherent dynamic between pairs of sites that enters the rate µ, and the influence of system-wide coherence is small. To extract the limit of good approximation we introduce scaling variables: Γ, which is proportional to the energy separations, dephasing and population loss rates, and Θ, which is proportional to the site interactions. We will show that the approximation of N₀ to M becomes good as ΘΓ⁻¹ approaches 0. We generalize the procedure of finding a kinetic network approximation in a mathematically appealing way using block matrices. We find a kinetic network matrix N that follows the evolution of M much more closely; it is over three orders of magnitude more precise than the network N₀, as shown in Figure 1. Further, it can be expanded in ΘΓ⁻¹ as

N = N₀ + N₁ + N₂ + ...,

where N₀ is the approximation described above, and the N_k are rate corrections due to coherent interactions via k intermediate sites. The expansion terms become smaller for increasing k, N_k ∝ Θ · (ΘΓ⁻¹)^k. By stopping the expansion at a finite k, kinetic network approximations of varying accuracy can be formed, allowing the study of coherent interaction at different "scales", or numbers of involved sites. We restrict our further investigation to the dominant contribution N₀ and the entire sum N. In our exact bounds we study the system with all population-loss mechanisms removed. Due to dephasing, the exciton spreads throughout the system at the exciton relaxation time τ and all populations become equal. The difference ∆τ between the relaxation times of M and N or N₀ gives a simple measure of how well the kinetic networks approximate the quantum network. As ΘΓ⁻¹ becomes small the kinetic networks approach the quantum network and ∆τ becomes small as well. We define τ and ∆τ as follows, using the Euclidean norm ‖p‖₂ = (Σᵢ pᵢ²)^(1/2) to compare population vectors.

Definition 1.
1. The map T : R^(n²) → Rⁿ is the restriction of density vectors ρ to population vectors p, and consequently T† gives the embedding of population vector space in density vector space. In particular, if the first n components of ρ represent the site populations, then T = (𝟙ₙ, 0_(n×(n²−n))).
2. The maximum relaxation time is τ = ‖N⁻¹‖, and the corresponding minimal relaxation rate is µ = τ⁻¹.
3. The maximum deviation of relaxation time between the quantum network M and the kinetic network N is ∆τ = ‖∫₀^∞ (T e^(Mt) T† − e^(Nt)) dt‖.
4. Define τ₀, µ₀ and ∆τ₀ in the same way, replacing N with N₀.
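To make the construction concrete, here is a minimal numerical sketch of the rate formula (1) and the network N₀, together with the relaxation time of Definition 1. All parameter values and variable names are illustrative choices of ours, not taken from the paper.

```python
# Minimal sketch: assemble N0 from the pair rates of Eq. (1) and read off
# the relaxation time of Definition 1. Parameter values are illustrative.
import numpy as np

def pair_rate(V, E, gamma, kappa=0.0):
    """mu = 2 V^2 (gamma + kappa) / ((gamma + kappa)^2 + E^2)."""
    g = gamma + kappa
    return 2.0 * V**2 * g / (g**2 + E**2)

def build_N0(V, E, gamma):
    """Rate matrix N0: off-diagonal entries mu_kl, diagonal fixed by
    population conservation (every column sums to zero)."""
    n = len(E)
    N0 = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            if k != l:
                N0[k, l] = pair_rate(V[k, l], E[k] - E[l],
                                     0.5 * (gamma[k] + gamma[l]))
    return N0 - np.diag(N0.sum(axis=0))

rng = np.random.default_rng(0)
n, Theta, Gamma = 5, 1.0, 50.0          # Theta / Gamma = 0.02, kinetic regime
V = Theta * rng.random((n, n)); V = 0.5 * (V + V.T); np.fill_diagonal(V, 0)
E, gamma = Gamma * rng.random(n), Gamma * np.ones(n)
N0 = build_N0(V, E, gamma)

decay = np.sort(np.linalg.eigvalsh(-N0))  # N0 symmetric; decay[0] ~ 0 is e
tau0 = 1.0 / decay[1]                     # mu0 = smallest decay rate on I
print("relaxation time tau0 =", tau0)
```

The zero eigenvalue corresponds to the conserved total population (the direction e), and the smallest nonzero decay rate is the µ₀ of Definition 1.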
For our bounds we require that every site experiences dephasing. Further, the network has to be connected, meaning that any two sites can exchange populations (directly or indirectly), such that the relaxed state will have equal population everywhere. And finally, we also require our site interactions to be real, but it is clear from our proofs that the generalization to complex interactions could be treated in a similar manner. Our first result shows how fast the relaxation times of the two kinetic networks N₀ and N approximate that of the quantum network M as ΘΓ⁻¹ gets small.

Theorem 2. There are scaling invariant constants k₁ and k₂, such that for ΘΓ⁻¹ small enough we have the following bounds:
1. The relative difference of relaxation time between quantum evolution M and kinetic evolution N₀ is bounded by ∆τ₀,rel = ∆τ₀/τ₀ ≤ k₁ ΘΓ⁻¹.
2. The relative difference of relaxation time between quantum evolution M and kinetic evolution N is bounded by ∆τ_rel = ∆τ/τ ≤ k₂ Θ²Γ⁻².

This Theorem follows from Theorem 5 and Corollary 7 in Section 5. We also find the following exponential bounds on the time dependence.

Theorem 3. There are scaling invariant constants k₃, k₄ and k₅, such that for any initial population distribution p₀ we have the following bounds, as long as ΘΓ⁻¹ is small enough:
1. For all times t ≥ 0, ‖T e^(Mt) T† p₀ − e^(N₀t) p₀‖₂ ≤ k₃ e^(−µ₀t/2) · ΘΓ⁻¹.
2. For all times t ≥ 0, ‖T e^(Mt) T† p₀ − e^(Nt) p₀‖₂ ≤ k₄ e^(−µt/2) · Θ²Γ⁻² (1 + k₅ ln(Θ⁻¹Γ)).

This Theorem follows from Theorem 8 and Corollary 10 in Section 6. We expect that more sophisticated methods might yield the same bound without the Θ²Γ⁻² log ΘΓ⁻¹ term.

The quantum network

We first introduce the master equation for the coherent model. Then we reformulate the equation in vector form and combine the entire dynamic in the real n² × n²-matrix M. We describe the general structure of M as a preparation for the next section, where we generate kinetic networks from parts of M.

Master equation

We consider the same quantum mechanical system studied in [11], with n sites carrying a single excitation, which is equivalent to a system with n states/levels. The site energies are E_k ∈ R, so the energy operator is

H₀ = Σ_k E_k |k⟩⟨k|.

Site k couples to site l with interaction strength V_kl ∈ C, so the interaction operator is

V = Σ_(k≠l) V_kl |k⟩⟨l|.

Site trapping, re-emission and recombination can be incorporated by an anti-hermitian operator A. Let κ_k be the combined rate of exciton loss at site k due to these effects; then A is defined as

A = ½ Σ_k κ_k |k⟩⟨k|,

so that the population at site k decays at rate κ_k. Finally, every site is also under the influence of dephasing at rate γ_k ≥ 0, incorporated in the Lindbladian superoperator

L(ρ) = Σ_k ( L_k ρ L_k† − ½ {L_k† L_k, ρ} ) with L_k = √γ_k |k⟩⟨k|.

Setting ℏ = 1, the single exciton manifold of the quantum network is described by the master equation

ρ̇ = −i [H₀ + V, ρ] − {A, ρ} + L(ρ),    (2)

where square and curly brackets represent the commutator and anti-commutator respectively. For now we set A = 0, ignoring exciton depleting processes as explained above; we will mention how to include them in the kinetic network approximations later on. Our approximation becomes exact in the limit where the energy differences between sites are large, the dephasing is large and the interactions are small. To be specific, we introduce scaling parameters Γ and Θ and consider the limit ΘΓ⁻¹ → 0. Energies and dephasing scale like Γ, and interactions scale like Θ. With these assumptions the master equation turns into

ρ̇ = −i [Γ H₀ + Θ V, ρ] + Γ L(ρ).    (3)

Because this equation is linear in ρ it can be converted into the vector form ρ̇ = Mρ, where ρ ∈ R^(n²) is the density matrix in vector form and M is a real n² × n²-matrix. Two procedures to find M are presented in 2.2 and 3.5.
Converting to vector equation

We rewrite the master equation (3), skipping the scaling factors Θ and Γ; it is easy to reintroduce them at a later point:

ρ̇ = −i [H₀ + V, ρ] + L(ρ).    (4)

Our first goal is to convert this into the differential equation ρ̇ = Mρ for density "vectors" ρ ∈ R^(n²). Notice that because ρ = ρ† the space of density matrices has n² real dimensions, so we are not losing any information when mapping ρ to ρ. We use the following conversion: 1. The first n entries of the density vector are the populations, the real diagonal entries of ρ. 2. For the entries n+1 to n² we alternate between the real and imaginary parts of the coherences (the off-diagonal entries of ρ), starting with the entry ρ_kl where k = 1 and l = 2, continuing by increasing l until l = n, then moving to the entry ρ₂₃. We multiply all these entries by √2, a normalization factor useful to achieve simpler expressions later on. Other mappings will yield the same kinetic networks, as long as they allow for an easy separation of population and coherence space. While somewhat tedious, it is now relatively straightforward to find the matrix M such that ρ̇ = Mρ. To find the rows k = 1...n we write out the diagonal components of the RHS of (4), and to find the rows k = n+1,...,n² we write out the off-diagonals of the RHS of (4). We follow this procedure explicitly for the case n = 3 in Appendix A. From there it is obvious how the procedure generalizes to larger n. Here we will only present the final form.

The coherent evolution matrix M

For simple notation, and to simply extract the kinetic networks, we split up the density vector space R^(n²). Let P = Rⁿ be the space of populations and let C = R^(n²−n) be the space of coherences. We can then write density vectors as ρ = (p, c)† with p ∈ P and c ∈ C. With this splitting the matrix M describing the quantum network looks like

M = (  0    a†
      −a    b  ),

where a : P → C and b : C → C are real matrices (so a† = aᵀ, but we'll keep the more general notation for later). Notice that the populations do not affect each other directly, but only via the coherences. Matrix a describes how populations couple to coherences; its entries are real and imaginary parts of V_kl. Naturally, site k will only couple to coherences kl with l ≠ k; thus, of the (n²−n) entries in the k-th column of a, only 2(n−1) are nonzero. Matrix b describes how coherences couple to other coherences. If considered as a block matrix with 2×2-blocks, the diagonal block for the coherence between site k and site l is of the form

( −γ_kl    E_kl
  −E_kl   −γ_kl ),    (5)

where γ_kl = ½(γ_k + γ_l) and E_kl = E_k − E_l. The off-diagonal blocks consist of real and imaginary parts of V_kl. From the form of M, when ignoring the off-diagonal blocks of b, we see that site k couples to site l via the coupling strength V_kl, then some mixture of γ_kl and E_kl, and then again via the coupling strength V_kl. This reminds us of the rates of the form µ = 2V²γ/(γ² + E²) described in (1) that make up the matrix N₀. We will make this intuition precise in the next subsections.
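The conversion to a linear superoperator is easy to check numerically. The sketch below (our illustration) builds the generator in the complex vectorized basis, with vec(ρ) taken row by row; the paper's real n²-dimensional form is related to this by an invertible change of coordinates, so the kinetic networks extracted later do not depend on the choice.

```python
# Sketch: vectorized generator of the dephasing master equation, so that
# M @ rho.flatten() equals the flattened right-hand side of (4).
import numpy as np

def lindblad_superop(H, gamma):
    """Generator of rho_dot = -i[H, rho] + L(rho), L_k = sqrt(gamma_k)|k><k|,
    in the row-major vec convention: (A kron B) vec(X) = vec(A X B^T)."""
    n = H.shape[0]
    I = np.eye(n)
    M = -1j * (np.kron(H, I) - np.kron(I, H.T))      # coherent part -i[H, .]
    for k in range(n):
        P = np.outer(I[k], I[k])                     # projector |k><k|
        M += gamma[k] * (np.kron(P, P) - 0.5 * (np.kron(P, I) + np.kron(I, P)))
    return M

# self-check against a direct evaluation of the right-hand side of (4)
rng = np.random.default_rng(1)
n = 3
H = rng.random((n, n)); H = 0.5 * (H + H.T)
gamma = rng.random(n) + 1.0
M = lindblad_superop(H, gamma)

rho = rng.random((n, n)) + 1j * rng.random((n, n)); rho = 0.5 * (rho + rho.conj().T)
rhs = -1j * (H @ rho - rho @ H)
for k in range(n):
    P = np.outer(np.eye(n)[k], np.eye(n)[k])
    rhs += gamma[k] * (P @ rho @ P - 0.5 * (P @ rho + rho @ P))
print(np.allclose(M @ rho.flatten(), rhs.flatten()))  # True
print(np.allclose(np.eye(n).flatten() @ M, 0))        # trace is conserved
```

The second check confirms that vec(𝟙) is a left null vector of M, i.e., that the total population is conserved when A = 0.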
Kinetic networks

In this section we show how the kinetic network N emerges naturally out of the study of the resolvent (z − M)⁻¹. We expand N in powers of ΘΓ⁻¹, giving the series N = Σ_(k≥0) N_k, with the leading order contribution being N₀. For some steps involving matrix calculations we only give a simplified version; however, in Appendix A we follow the procedure described below, giving the full expressions in the case n = 3.

Extracting the kinetic network N

To extract kinetic networks from M we consider its resolvent (z − M)⁻¹. Remember that for any holomorphic function f we have

f(M) = (1/2πi) ∮ f(z) (z − M)⁻¹ dz.

Therefore, if one can bound the resolvent appropriately, one can also bound the evolution operator e^(Mt) and other related quantities. Because we only care about approximating the population dynamics, we restrict our view to the population block of the resolvent of M. The Banachiewicz formula [2] gives the inverse of a 2×2-block matrix. The first block of the inverse (in our case the population block) is called the Schur complement, and due to its basic nature it has many applications in applied mathematics, statistics and physics [14]. Here we use it to "pull" the coherence dynamic back into population space. Only writing the Schur complement, and skipping the other blocks of the resolvent, we have

T (z − M)⁻¹ T† = (z − a†(b − z)⁻¹a)⁻¹.

Remember the operator T, the restriction to population space: with our choice of density vector basis it has the form T = (𝟙ₙ, 0_(n×(n²−n))). The difference of evolution for initial conditions ρ₀ = (p₀, 0)† = T†p₀ (zero coherences) between quantum network M and kinetic network N is thus

T e^(Mt) T† p₀ − e^(Nt) p₀ = (1/2πi) ∮ e^(zt) [ (z − a†(b − z)⁻¹a)⁻¹ − (z − N)⁻¹ ] p₀ dz.

For a good approximation we require

(z − a†(b − z)⁻¹a)⁻¹ ≈ (z − N)⁻¹.    (6)

At this point it is a small step to drop the second z on the LHS, in which case the formula becomes an equality if we set

N = a† b⁻¹ a.

To see intuitively that this approximation is good, consider the following. Matrix b contains terms proportional to Γ on its diagonal and terms proportional to Θ on its off-diagonal, and matrix a is proportional to Θ; therefore the difference between a†(b − z)⁻¹a and a†b⁻¹a becomes small when ΘΓ⁻¹ becomes small. For values of z that are smaller than the eigenvalues of b, the approximation (6) is good because then (b − z)⁻¹ ≈ b⁻¹; for values of z larger than the eigenvalues of b it is good because then z is much larger than the eigenvalues of N, and so both sides of (6) are approximately z⁻¹. This basic insight is what drives our bounds in Section 8.

Expanding N

As mentioned in 2.3, b consists of 2×2-blocks proportional to Γ on the diagonal and 2×2-blocks proportional to Θ on the off-diagonal. We separate these contributions, defining b = b₀ + ν. This leads to the expansion (a Neumann series in b₀⁻¹ν)

N = a†(b₀ + ν)⁻¹a = Σ_(k≥0) N_k, with N_k = (−1)^k a†(b₀⁻¹ν)^k b₀⁻¹ a.

When using the explicit forms of a, b₀ and ν one can see that the rates in N_k consist of corrections due to interactions via k intermediates. Roughly speaking, each of the (k+1) sites along the chain contributes a factor of Θ, and each of the k coherences (links) contributes a factor of Γ⁻¹; thus N_k scales like Θ^(k+1) Γ^(−k).

The network N₀

We now present the explicit form of N₀ = a†b₀⁻¹a, the dominant contribution to N. We only show the crucial parts of the calculations, which should make clear how to get the result for general n. Notice that, from 3.2 and (5), it follows that b₀ is an (n²−n) × (n²−n) matrix whose only nonzero entries lie in 2×2 blocks along the diagonal. With a unitary transformation U, built from 2×2 blocks of the form

u = (1/√2) ( 1   1
             i  −i ),    (8)

we can diagonalize these 2×2 blocks. Hence, the entire matrix b₀ can be diagonalized by applying the transformation b̃₀ = U†b₀U, with

b̃₀ = diag(α₁₂, ᾱ₁₂, α₁₃, ᾱ₁₃, ..., α_(n−1,n), ᾱ_(n−1,n)), α_kl = −γ_kl + iE_kl,

where diag denotes a diagonal matrix with the given diagonal entries. In fact, U also helps to simplify a. Consider the case n = 3: the transformed matrix ã = U†a takes a simple form, with exactly two entries ±V_kl (respectively ±V̄_kl) in each coherence row (10); its explicit form is given in Appendix A, and the same happens for ν̃ = U†νU (derivation in Appendix A). Notice that both b̃₀ and ã are complex matrices; still, we can use the transformed matrices ã, b̃₀, and ν̃ when finding explicit expressions for the real matrices N_k, because U cancels out. For example, N₀ = a†b₀⁻¹a = ã†b̃₀⁻¹ã. A simplified calculation of the product ã†b̃₀⁻¹ã for n = 3 illustrates how the rates µ_kl result from the matrix multiplication. More generally, for any n we have

(N₀)_ij = µ_ij = 2V_ij² γ_ij / (γ_ij² + E_ij²) for i ≠ j, and (N₀)_ii = −Σ_(j≠i) µ_ji.

This is just the network described in [3] and the introduction.
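Continuing the sketch above, the Schur complement can be evaluated numerically. As a consistency check (our own, under the same conventions as before), for n = 2 the population block of the z = 0 Schur complement reproduces exactly the pair rate µ of Eq. (1).

```python
# Continuing the sketch: N = M_PP - M_PC M_CC^{-1} M_CP, the z = 0 Schur
# complement of the coherence block; for n = 2 it reproduces the pair rate.
import numpy as np

def schur_network(M, n):
    pop = np.array([k * n + k for k in range(n)])   # indices of rho_kk in vec(rho)
    coh = np.setdiff1d(np.arange(n * n), pop)
    A = M[np.ix_(pop, pop)]; B = M[np.ix_(pop, coh)]
    C = M[np.ix_(coh, pop)]; D = M[np.ix_(coh, coh)]
    return (A - B @ np.linalg.solve(D, C)).real     # imaginary parts are rounding noise

E12, V, g = 3.0, 0.2, 1.0
H2 = np.array([[E12, V], [V, 0.0]])
N = schur_network(lindblad_superop(H2, np.array([g, g])), 2)
mu = 2 * V**2 * g / (g**2 + E12**2)
print(np.allclose(N, [[-mu, mu], [mu, -mu]]))       # True
```

The Schur complement is invariant under any change of basis acting within the coherence space alone, which is why the complex vectorized basis gives the same (real) kinetic network as the paper's real basis.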
Including re-emission, recombination and trapping

The population decreasing effects of re-emission, recombination and trapping can all be described by the rates κ_k of the diagonal anti-hermitian operator A = ½ Σ_k κ_k |k⟩⟨k| included in our general master equation (2). The contribution to the rate of change ρ̇ is easily calculated, and M becomes

M = ( c₁      a†
      −a   b + c₂ ),

with the new contributions c₁ = −diag(κ₁, ..., κ_n) and c₂ = −diag(κ₁₂, κ₁₂, κ₁₃, κ₁₃, ..., κ_(n−1,n), κ_(n−1,n)), with κ_kl = ½(κ_k + κ_l) the rate that decreases the coherence of sites k and l. With this the networks become

N = c₁ + a†(b + c₂)⁻¹a and N₀ = c₁ + a†(b₀ + c₂)⁻¹a,    (12)

which also hold with the replacements a → ã, b → b̃ and ν → ν̃, while leaving c₁ and c₂ unchanged. The rates in N₀ can again be calculated directly:

µ_kl = 2V_kl² (γ_kl + κ_kl) / ((γ_kl + κ_kl)² + E_kl²).    (13)

Numerical simulations

According to the last two subsections, network N₀ is easy to calculate directly, while network N and any k-site contribution N_k can be formed from the general definitions of ã, b̃₀ and ν̃ (see Appendix B), which can be somewhat tedious. However, there is another approach, related to numerical simulations. When running numerical calculations to simulate a complex system, one usually first converts the master equation (2) into the form ρ̇ = Mρ, where M is the superoperator formed by the RHS of the master equation (2). Once the entire real matrix is found, it is cut into population and coherence blocks, and a generalized kinetic network of the same form as N is calculated as the Schur complement of the coherence block. Hence, if one has already calculated M in order to simulate a quantum network, it only takes a few steps to find the kinetic network approximation N.
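This workflow is easy to automate. Continuing the sketch, the snippet below adds the loss operator A = ½ Σ_k κ_k |k⟩⟨k| to the simulated superoperator and reads off the generalized kinetic network; the helper names are ours, not the paper's.

```python
# Continuing the sketch: add population loss and extract the generalized
# kinetic network N = M_PP - M_PC M_CC^{-1} M_CP from the full superoperator.
import numpy as np

def add_loss(M, kappa):
    """Adds -{A, rho} with A = (1/2) diag(kappa): site k loses population
    at rate kappa_k, coherences decay at kappa_kl = (kappa_k + kappa_l)/2."""
    n = len(kappa)
    I = np.eye(n)
    for k in range(n):
        P = np.outer(I[k], I[k])
        M = M - 0.5 * kappa[k] * (np.kron(P, I) + np.kron(I, P))
    return M

rng = np.random.default_rng(2)
n = 4
H = rng.random((n, n)); H = 0.5 * (H + H.T)      # real interactions V_kl
Mq = add_loss(lindblad_superop(H, 10.0 + rng.random(n)), 0.1 * rng.random(n))
Nk = schur_network(Mq, n)                         # generalized kinetic network
print(Nk.shape, np.allclose(Nk, Nk.T))            # (4, 4), symmetric for real V
```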
Preliminaries

In this section we give some definitions and conditions. The conditions allow us to infer basic facts about the spectra of the operators N₀, N and M, which are required for all our bounds in Sections 5, 6 and 8.

Norm

Because for our bounds of the relaxation time we remove all population decreasing effects, all evolutions M, N₀, and N leave the total population invariant. Therefore we split up the space of populations P. Set e = (1, 1, ..., 1)†/n ∈ P, the equal population vector. As we will prove in Proposition 4, as long as the network meets certain conditions, both quantum and kinetic evolutions will tend to e for any initial condition with total population 1. Consequently, we are only interested in the properties of our matrices on the space of population differences I = {p ∈ P : Σ_k p_k = 0}, the orthogonal complement of e. This is reflected in the norm we use, defined as follows. For A : X₁ → X₂, where X₁ and X₂ are equal to I or C, we define the operator norm as

‖A‖ = max { ‖Ax‖₂ : x ∈ X₁, ‖x‖₂ = 1 },

where ‖·‖₂ is the Euclidean norm. Hence, from now on, we think of our matrix blocks as operators a : I → C, b : C → C and N, N₀ : I → I. Note that ‖a‖ is the same whether we maximize over I or P, because a e = 0, and that ‖a‖ = ‖a†‖. Also, according to Proposition 4, N₀ < 0 on I and therefore N₀⁻¹ is well-defined; the same holds for N. Define µ = ‖N⁻¹‖⁻¹, the absolute value of the eigenvalue of N on I closest to 0, and define µ₀ in the same way for N₀.

Conditions

For all our following bounds we have a set of conditions.
• First, we require that the network is connected, in the sense that any two sites k and l are coupled, at least via some intermediates: for some integer p ≥ 0 there are sites m_j, j = 1...p, such that the product V_(k m₁) V_(m₁ m₂) ··· V_(m_p l) is nonzero. This condition ensures that all sites can exchange population and the evolution ultimately converges to e.
• Second, we require all the site dephasing rates to be strictly positive, γ_k > 0. This condition is essential for our approximation, as the coherences need to decay for the evolution M to become non-oscillatory. Notice that the limit ΘΓ⁻¹ → 0 does not require that the dephasing rates get larger, but they will be much larger than the magnitude of the eigenvalues of N or N₀ (the population decay rates), because Γ ≫ Θ²Γ⁻¹.
• Finally, we require that the V_kl are real. This ensures that N is symmetric and has a real spectrum (see Proposition 4), which allows simpler bounds in our proofs. While N₀ is always symmetric, we first compare the evolutions of M and N, and then the evolutions of N and N₀; therefore we require this condition for both N₀ and N. We are confident that our methods would extend to the case of complex V_kl, but for the sake of clarity we restrict ourselves to the simpler case.

Inverse bounds

Our proofs consist mainly of using the following two bounds on the inverse on different parts of resolvents. First, consider the Taylor series of the inverse close to 1, which for real numbers x gives (1 − x)⁻¹ = 1 + x + x² + ..., and hence |(1 − x)⁻¹ − 1| ≤ 2|x| for |x| ≤ 1/2. This is readily translated to bounds for operators: if ‖B‖ ≤ ½‖A⁻¹‖⁻¹, then

‖(A + B)⁻¹ − A⁻¹‖ ≤ 2‖A⁻¹‖²‖B‖    (15)

and

½‖A⁻¹‖ ≤ ‖(A + B)⁻¹‖ ≤ 2‖A⁻¹‖,    (16), (17)

and these bounds follow from the fact (A + B)⁻¹ − A⁻¹ = −A⁻¹B(A + B)⁻¹.

Spectral properties

The following Proposition gives some basic facts about the spectra of the kinetic networks N and N₀. We will use these properties in the proofs of our bounds.

Proposition 4. 1. N and N₀ are well-defined for ΘΓ⁻¹ small enough. 2. N is real. 3. If the V_kl are real, then N and N₀ are symmetric. 4. N₀ e = N e = 0. 5. N₀ < 0 on I. 6. For ΘΓ⁻¹ small enough, N₀ < −µ/2 and N < −µ₀/2 on I.

Proof. 1. These properties follow directly from the form in (12) and (13). 2. N is real because it is a product of a, b⁻¹ and a†, which are also real. 3. If V_kl is real then, using ã (see (10)), one checks that N = ã†b̃⁻¹ã is symmetric. 4. From (10) it is not hard to see how ã looks for any n: the two rows for the coherence between sites k and l have exactly two non-zero entries; the first has V_kl and −V_kl, and the second has V̄_kl and −V̄_kl. Therefore ã e = 0, and so N₀ e = N e = 0. 5. Because γ_k > 0, every pair with V_kl ≠ 0 has µ_kl > 0, and v†N₀v = −½ Σ_(k,l) µ_kl (v_k − v_l)² ≤ 0. Hence, because the network is connected, equality forces v_k = v_l for all k and l, and with Σ_k v_k = 0 it follows that v = 0; thus N₀ < 0 on I. 6. Because µ₀ = ‖N₀⁻¹‖⁻¹ and N₀ < 0, we have N₀ ≤ −µ₀ on I. Note that µ ∝ Θ²Γ⁻¹ itself changes as ΘΓ⁻¹ gets small, so µ − µ₀ is not directly controlled in absolute value; however, as we now show, the spectra of N and N₀ approach each other relative to their "size": |µ − µ₀| ≪ µ₀. We bound the distance of N and N₀ with the inverse bound. For ΘΓ⁻¹ small enough we have ‖ν‖ ≤ ½‖b₀⁻¹‖⁻¹, and we can apply (15) on b = b₀ + ν:

‖N − N₀‖ = ‖a†((b₀ + ν)⁻¹ − b₀⁻¹)a‖ ≤ 2‖a‖²‖b₀⁻¹‖²‖ν‖.

So, the distance of N and N₀ is proportional to Θ²Γ⁻² · Θ, while the eigenvalues of N₀ and N (in particular µ₀ and µ) are proportional to Θ²Γ⁻¹. Comparing the two gives

|µ − µ₀| ≤ ‖N − N₀‖ ∝ µ₀ · ΘΓ⁻¹.

That means the eigenvalues approach each other relative to their magnitude; in particular N becomes negative definite like N₀, and |µ − µ₀|/µ₀ → 0.

Bounding relaxation time error

We now give an explicit definition of relaxation time and the norms we use to control it. Then we derive bounds, first comparing the quantum network M to the kinetic network N, and then comparing the kinetic networks N and N₀. As a simple sanity check, consider the following. If we scale Γ ∝ s and Θ ∝ s, then also M, N ∝ s, and time scales inversely, ∆τ, τ ∝ s⁻¹. Therefore the relative error ∆τ_rel = ∆τ/τ stays unchanged, and we expect bounds in terms of positive powers of ΘΓ⁻¹. Our two bounds show exactly this behavior. The approximation of N to M is proportional to Θ²Γ⁻², while the approximation of N₀ to N is proportional to ΘΓ⁻¹; combining the two, it follows that the approximation of N₀ to M is also proportional to ΘΓ⁻¹. Note that all the results in this Section require the conditions in 4.2.
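Continuing the sketch, the statement |µ − µ₀| ≪ µ₀ can be observed directly. In the complex vectorized basis, N₀ is obtained by keeping only the diagonal of the coherence block (the b₀ approximation), and the relative eigenvalue gap shrinks roughly like ΘΓ⁻¹; the helper below is our own construction.

```python
# Continuing the sketch: relative gap between mu and mu0 as Theta/Gamma -> 0.
import numpy as np

def schur_network_b0(M, n):
    """Same as schur_network, but with the coherence block replaced by its
    diagonal -- this is the b0 (two-site) approximation, giving N0."""
    pop = np.array([k * n + k for k in range(n)])
    coh = np.setdiff1d(np.arange(n * n), pop)
    A = M[np.ix_(pop, pop)]; B = M[np.ix_(pop, coh)]
    C = M[np.ix_(coh, pop)]; D = np.diag(np.diag(M[np.ix_(coh, coh)]))
    return (A - B @ np.linalg.solve(D, C)).real

def mu_of(N):
    """Relaxation rate on I: smallest nonzero eigenvalue of -N (N symmetric)."""
    return np.sort(np.linalg.eigvalsh(-0.5 * (N + N.T)))[1]

rng = np.random.default_rng(3)
n, Gamma = 4, 100.0
V = rng.random((n, n)); V = 0.5 * (V + V.T); np.fill_diagonal(V, 0)
E, gamma = Gamma * rng.random(n), Gamma * np.ones(n)
for Theta in (10.0, 1.0, 0.1):
    M = lindblad_superop(np.diag(E) + Theta * V, gamma)
    mu, mu0 = mu_of(schur_network(M, n)), mu_of(schur_network_b0(M, n))
    print(Theta / Gamma, abs(mu - mu0) / mu0)   # shrinks roughly linearly
```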
Relaxation time

By Proposition 4, the eigenvalues of N and N₀ on I are all negative for ΘΓ⁻¹ small enough, so for any initial distribution p₀ ∈ I we have e^(Nt) p₀ → 0 and e^(N₀t) p₀ → 0 for large t. We can integrate,

∫₀^∞ e^(Nt) dt = −N⁻¹,

and applying the operator norm maximizes the relaxation time for the kinetic network N over all population differences p₀ ∈ I: set

τ = ‖N⁻¹‖ = µ⁻¹,

and in the same way define τ₀ = µ₀⁻¹ for the network N₀. We define the error in relaxation time as the relaxation time difference maximized over I,

∆τ = ‖ ∫₀^∞ (T e^(Mt) T† − e^(Nt)) dt ‖.

Hence, bounding ∆τ means controlling the worst possible error in relaxation time when approximating M by N. The relative error is ∆τ_rel = ∆τ/τ; notice that we compare the worst possible relaxation time error to the longest possible relaxation time, and those two do not necessarily occur for the same initial condition. We define ∆τ₁ and ∆τ₁,rel in the same way, comparing N and N₀.

Resolvent difference

Converting the operator for the relaxation time error, we get

∫₀^∞ (T e^(Mt) T† − e^(Nt)) dt = −(1/2πi) ∮ z⁻¹ S(z) dz,    (18)

where the complex integration follows a contour surrounding both Spec M and Spec N. Define S(z) to be the difference of the two resolvents,

S(z) = T (z − M)⁻¹ T† − (z − N)⁻¹.

We now seek a bound on the norm of the integral (18).

Comparing the relaxation time of M and N

When bounding second order terms with the inverse bound we encounter κ = ‖a‖²‖b⁻¹‖², and κ₀ for the corresponding term with b₀ instead of b. Notice the scaling behavior: µ, µ₀ ∝ Θ²Γ⁻¹ and κ, κ₀ ∝ Θ²Γ⁻². We will change the contour integration in (18) to be along the imaginary axis, z = iy for y ∈ R. We prove the somewhat technical bounds on S(iy) in Lemma 11 in Section 8.

Theorem 5. There is a scaling invariant constant k₂, depending only on the scaling independent constant β > 0 from Lemma 11, such that for ΘΓ⁻¹ small enough the relative error is bounded by

∆τ_rel = ∆τ/τ ≤ k₂ Θ²Γ⁻².

Proof. We set the integration contour in (18) to be along the imaginary axis, z = iy for y ∈ R, with y going from −R to +R. We close the contour to the left, in the half plane of negative real parts, along a circle of radius R. According to Lemma 11, S(z) has no poles with Re z ≥ 0, and so all poles lie within this contour for R large enough and ΘΓ⁻¹ small enough. As R tends to infinity the integrand behaves like 1/z³, so the half-circle does not contribute to the integral. We can therefore change the complex integral to an integral in y over all of R. Now split the integral into the two regions |y| ≤ µ and |y| ≥ µ, and use the corresponding bounds from Lemma 11: choose ΘΓ⁻¹ small enough so that µ < α and use part 1 of the Lemma to bound the contribution of |y| ≤ µ, and use part 2 of the Lemma to bound the contribution of |y| ≥ µ. Adding the two bounds gives the result.

Comparing the relaxation time of N and N₀

Theorem 6. If ΘΓ⁻¹ is small enough, then

∆τ₁ ≤ 4 κ ‖ν‖ / µ²,

where µ and κ can also be replaced by µ₀ and κ₀. This gives a bound on the relative error,

∆τ₁,rel ≤ 4 κ ‖ν‖ / µ ∝ ΘΓ⁻¹.

Proof. In this case we don't need to bound the resolvent; instead we can evaluate the integral,

∫₀^∞ (e^(Nt) − e^(N₀t)) dt = N₀⁻¹ − N⁻¹.

We use the inverse bound (15) twice. First, because ‖ν‖ ≤ ½‖b⁻¹‖⁻¹ as long as ΘΓ⁻¹ is small enough, we can apply the bound on b₀ = b − ν and get

‖N − N₀‖ ≤ 2‖a‖²‖b⁻¹‖²‖ν‖ = 2κ‖ν‖.    (19)

Now, apply the bound again with A = N and B = N₀ − N. The condition for B is ‖N₀ − N‖ ≤ 2κ‖ν‖ ≤ ½‖N⁻¹‖⁻¹ = µ/2, where we used (19) in the second step. The last inequality is again achieved for ΘΓ⁻¹ small enough, because the two sides scale like Θ³Γ⁻² and Θ²Γ⁻¹. Now it follows that

‖N⁻¹ − N₀⁻¹‖ ≤ 2‖N⁻¹‖²‖N − N₀‖ ≤ 4κ‖ν‖/µ²,

as claimed. By switching the roles of b and b₀ we receive the corresponding bound with κ₀ and µ₀.

As a corollary (Corollary 7) we receive a bound on the relaxation time difference between the fully quantum mechanical evolution of M and the simple kinetic network evolution of N₀: combining Theorems 5 and 6 via the triangle inequality yields the bound ∆τ₀,rel ≤ k₁ ΘΓ⁻¹ stated in Theorem 2.
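Continuing the sketch, ∆τ can also be estimated by brute force: sample the two propagators on a time grid and integrate. This is a crude check of Definition 1, not the method of proof; grid parameters are arbitrary choices of ours.

```python
# Continuing the sketch: direct numerical estimate of Delta_tau
# (operator norm of the integrated propagator difference on I).
import numpy as np
from scipy.linalg import expm

def delta_tau(M, N, n, t_max, steps=400):
    pop = np.array([k * n + k for k in range(n)])
    Q = np.eye(n) - np.ones((n, n)) / n          # projector onto I = e_perp
    ts = np.linspace(0.0, t_max, steps)
    D = np.zeros((n, n))
    for i in range(steps - 1):
        t = 0.5 * (ts[i] + ts[i + 1])
        Pq = expm(M * t)[np.ix_(pop, pop)].real  # population block: T e^{Mt} T^dagger
        D += (Pq - expm(N * t)) * (ts[i + 1] - ts[i])
    return np.linalg.norm(Q @ D @ Q, 2)

# reuse M (no-loss case) from the previous sketch:
N = schur_network(M, n)
print("Delta_tau ~", delta_tau(M, N, n, t_max=20.0 / mu_of(N)))
```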
Bounding evolution error

In this chapter we bound the difference of the time evolution operators for M, N and N₀. Our error bounds look as follows:

‖e^(Mt) − e^(Nt)‖ ≤ e^(−µt/2) · X,

where X is proportional to Θ²Γ⁻² up to a logarithmic term, and proportional to ΘΓ⁻¹ if N is replaced with N₀. The logarithmic term appears due to intermediate times; the integral over time performed in the last chapter seems to have conveniently guided us around that logarithm. As for the time dependence, using a shifting integration contour might give a bound like e^(−µt) µt, but a better control of the spectrum would be necessary to shift the contour close to −µ for long times. As in 5.2, we write the evolution difference as a complex integral before we prove bounds:

T e^(Mt) T† − e^(Nt) = (1/2πi) ∮ e^(zt) S(z) dz.    (20)

Note that all the results in this Section again require the conditions in 4.2.

Comparing the evolution of M and N

We will change the contour integration in (20) to be parallel to the imaginary axis, z = iy − µ/2 for y ∈ R. With this choice the exponential in the integral yields exponential decay at rate µ/2. Again we give the technical bounds on S(iy − µ/2) in Lemma 12 in Section 8.

Theorem 8. If ΘΓ⁻¹ is small enough, then for all t ≥ 0 we have

‖T e^(Mt) T† − e^(Nt)‖ ≤ e^(−µt/2) · k₄ Θ²Γ⁻² (1 + k₅ ln(Θ⁻¹Γ)),

where k₄ and k₅ are scaling independent constants.

Proof. We set the integration contour in (20) to be parallel to the imaginary axis, z = iy − µ/2, with y going from −R to +R. We close the contour to the left in the half plane of negative real parts along a circle of radius R. According to Lemmas 11 and 12, S(z) is bounded for Re z ≥ −µ/2 and hence has no poles there; therefore all the poles lie within the contour for R large enough and ΘΓ⁻¹ small enough. As R tends to infinity the integrand behaves like (1/z²) e^(t Re z), so the half-circle does not contribute to the integral. We can therefore change the complex integral to an integral in y over all of R. Splitting this integral into the regions covered by Lemma 12 and adding the three resulting bounds gives the result. The middle term of the parenthesis has the worst scaling behavior, while the other two terms scale like Θ²Γ⁻²; therefore there are scaling independent constants k₄ and k₅ such that ‖T e^(Mt) T† − e^(Nt)‖ ≤ e^(−µt/2) · k₄ Θ²Γ⁻² (1 + k₅ ln(Θ⁻¹Γ)).

6.2 Comparing the evolution of N and N₀

Theorem 9. If ΘΓ⁻¹ is small enough, then for all t ≥ 0 we have

‖e^(Nt) − e^(N₀t)‖ ≤ e^(−µt/2) · k′₄ κ‖ν‖/µ ∝ e^(−µt/2) · ΘΓ⁻¹,

where k′₄ is scaling independent, and where µ and κ can also be replaced by µ₀ and κ₀.

Proof. We are bounding the integral (1/2πi) ∮ e^(zt) S̃(z) dz, where S̃(z) = (z − N)⁻¹ − (z − N₀)⁻¹. We use the same contour as in Theorem 8, z = iy − µ/2. According to Proposition 4, all poles of S̃(z) lie within this contour when ΘΓ⁻¹ is small enough and R is large enough. Because of the e^(zt) factor and S̃(z) tending to zero, the integral over the half-circle tends to 0 as R becomes large. We bound S̃(z) in much the same way that we bounded S(z) in Lemma 12; however, the procedure is more straightforward. For any z with Re z = −µ/2 and for ΘΓ⁻¹ small enough, the conditions of the inverse bounds are met, and we can apply (15), together with (16) and (17), to receive bounds on S̃(z) for small and large |y|. Adding the two bounds gives the claimed estimate with a scaling independent constant k₆. The whole proof works just as well when exchanging µ with µ₀ and κ with κ₀, giving a similar bound.

Corollary 10. As a corollary we receive a bound on the evolution difference between the fully quantum mechanical evolution of M and the simple kinetic network evolution of N₀: for all t ≥ 0,

‖T e^(Mt) T† − e^(N₀t)‖ ≤ k₃ e^(−µ₀t/2) · ΘΓ⁻¹,

where k₃ is a scaling independent constant.

Proof. The bound follows from Theorems 8 and 9 and the triangle inequality,

‖T e^(Mt) T† − e^(N₀t)‖ ≤ ‖T e^(Mt) T† − e^(Nt)‖ + ‖e^(Nt) − e^(N₀t)‖.
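Continuing the sketch, the envelope of Theorem 3 can be eyeballed numerically: the evolution error, multiplied back by e^(µ₀t/2), should stay bounded in t. We only inspect the shape, not the constants.

```python
# Continuing the sketch: evolution error versus the exponential envelope.
import numpy as np
from scipy.linalg import expm

N0 = schur_network_b0(M, n)
mu0 = mu_of(N0)
pop = np.array([k * n + k for k in range(n)])
p0 = np.zeros(n); p0[0] = 1.0                          # exciton starts at site 1
rho0 = np.zeros(n * n, dtype=complex); rho0[pop] = p0  # T^dagger p0: no coherences
for t in np.linspace(0.2, 4.0, 5) / mu0:
    err = np.linalg.norm((expm(M * t) @ rho0)[pop].real - expm(N0 * t) @ p0)
    print("%10.1f  %.3e  %.3e" % (t, err, err * np.exp(mu0 * t / 2)))
```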
Applications

The rate of direct population exchange determines the strength of the link between sites k and l in the network N₀. Because of our condition that γ_k > 0, the network topology is fully determined by the V_kl, but the strength of the links is also affected by γ_k and E_k. As applications, we consider two idealized networks. The first is a highly connected network where all sites are linked; the second is a circular chain where only nearest neighbors are linked. We numerically calculate the relaxation times for the networks M, N₀ and N and compare the relative errors. Then we compare these networks to randomized networks with the same network topology. We also discuss the dimension dependence of our bounds from Section 5 and again compare it to numerical simulations. All the simulations agree with our bounds, but they show much room for improvement when considering large dimensions. Finally, we discuss the FMO-complex and our model, for which some results were already shown in the introduction in Figure 1. For clarity of notation we recall that ∆τ, ∆τ₀ and ∆τ₁ are the relaxation time differences between the network pairs M−N, M−N₀ and N−N₀, respectively. This only makes the discussion more precise; generally ∆τ₀ and ∆τ₁ show the same dimension and scaling behavior, with small corrections to constants.

Highly connected network

Consider a highly connected network in which all pairs of sites interact with equal strength Θ and all sites have the same energies and dephasing rates (see Appendix C.1). Figure 2 plots the computed relative relaxation time differences ∆τ_rel and ∆τ₀,rel for different ΘΓ⁻¹, with the initial state localized at site 1. Both axes are logarithmic; hence a straight line with slope n represents a (ΘΓ⁻¹)ⁿ proportionality. The difference ∆τ_rel is too small to show any clear behavior. The difference ∆τ₀,rel is linear with slope approximately 2; hence the approximation is better than the slope 1 expected from Theorem 6. In the same figure we compare our idealized network to random networks where all V_kl are chosen randomly between 0 and Θ and all E_k are chosen randomly between 0 and Γ; hence they have the same topology. The magnitudes of the errors are similar over the range considered, but the slopes are different. All the samples show an error slope of 1 for ∆τ₀,rel, while the error slope for ∆τ_rel varies, but is in most parts steeper than the slope of ∆τ₀,rel. This behavior is closer to the behavior expected from our bounds. Generally, the agreement is about six orders of magnitude better for the network N than for the network N₀. For the ideal highly connected network we derive the quantities used in Theorems 5 and 6 analytically in Appendix C. The resulting bounds are

∆τ_rel ≤ c₁ n Θ²Γ⁻² and ∆τ₁,rel ≤ c₂ n ΘΓ⁻¹,

for dimension and scaling independent constants c₁ and c₂. The simulation of M has a relatively high error and becomes slow very fast as n gets larger; hence we can only get meaningful results for ∆τ₁,rel, the relaxation time difference of the networks N and N₀. The result in Figure 3 actually shows that the difference increases with slope 2, i.e., proportional to n². The reason is that in Theorem 6 we have the condition ‖ν‖ ≤ ½‖b⁻¹‖⁻¹, where the LHS is proportional to n and the RHS is constant (also discussed in the Appendix). If we increase the dimension at constant scaling, this condition and our bound break down. To still get a bound for large n we would need to readjust the scaling.

Linear network

Assume the sites are positioned on a circle and only nearest neighbors interact with strength Θ, where we use the equivalence n ≡ 0. Further, γ_k = Γ, and the E_k are such that E_kl = ΓE when |k − l| = 1, which is possible for n even.
Figure 4 plots the computed relative relaxation time differences ∆τ_rel and ∆τ₀,rel for different ΘΓ⁻¹, with the initial state localized at site 1. Interestingly, the quality of approximation by N₀ is improved over the highly connected model, while the quality of approximation by N has decreased. Also, both models show the same slope of about 2. We compare the ideal chain to random chains, for which the V_kl that equal Θ in the idealized case are instead chosen randomly between 0 and Θ, and all E_k are chosen randomly between 0 and Γ. We get essentially the same behavior, with all slopes being 2. That hints at a possible improvement of our bound in Theorem 6 in the case where the network is a chain, improving the proportionality from ΘΓ⁻¹ to Θ²Γ⁻². Generally, the agreement is about five orders of magnitude better for the network N than for the network N₀. As in the last section, we can derive the necessary quantities for our bounds and get

∆τ_rel ≤ c₃ Θ²Γ⁻² n² and ∆τ₁,rel ≤ c₄ ΘΓ⁻¹ n²,

for dimension and scaling independent constants c₃ and c₄. This time the condition ‖ν‖ ≤ ½‖b⁻¹‖⁻¹ does not break down, and the bounds hold for large dimensions as well. The n² terms are due to the lowest eigenvalue of N₀ being proportional to n⁻². This is a weakness of our strategy of using the operator norm for our bounds. Better bounds should be possible when only considering a localized exciton as the initial state: this initial state would be a superposition of all the eigenstates of N₀, and the average relaxation time would enter the bounds instead of the longest relaxation time (the smallest eigenvalue of N₀). As above, we skip the simulation of M because the error is too large, and consider ∆τ₁,rel only. The result in Figure 3 shows that the difference seems to approach a constant value for larger dimensions. So both our bounds could be improved for large dimensions.

The FMO-complex

The FMO-complex is a pigment-protein complex with trimer structure. Each monomer contains seven bacteriochlorophyll a pigments that capture and transport light. The excitons start out at site 1 or 6 and the trapping occurs at site 3 [1]; we set the initial state to be p₀ = (1/2, 0, 0, 0, 0, 1/2, 0)†. We use the same numerical values as [11]; the system Hamiltonian is taken from [4], with interactions and energies in cm⁻¹ (or 2.9978 · 10¹⁰ s⁻¹). Exciton recombination at rate κ = 1 ns⁻¹ and reaction center trapping at rate κ₃ = 1 ps⁻¹ enter the anti-hermitian operator A. We use the same dephasing rate for every site, γ_k = γ, and vary γ from 10⁻³ to 10⁵ cm⁻¹. Efficiency is calculated as the total population transferred to the reaction center, f = κ₃ ∫₀^∞ p₃(t) dt; we calculated f for the three models in Figure 1. Peak efficiency is reached for γ ≈ 170 cm⁻¹, close to the average energy gap along the chain, which is 146 cm⁻¹. The approximation N has less than 1% error even for the lowest γ used, and the approximation N₀ gets below 1% error for γ ≈ 2 cm⁻¹. Comparing this to our bounds, we have ‖a‖ = ‖a†‖ = 215 cm⁻¹ and, for large γ, ‖b⁻¹‖⁻¹ = γ. The numerical factor β changes because of the changing ratio between energies and dephasing; for large γ, however, it is approximately equal to 100. Hence, by our bound, the 1% error margin is only guaranteed for γ ≈ 21500 cm⁻¹, so our numerical factors could certainly be much improved. But this is not unexpected, since our main goal was to find the leading behavior in ΘΓ⁻¹.
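The efficiency integral needs no time stepping: with loss included, all eigenvalues of M have negative real part, so ∫₀^∞ ρ(t) dt = −M⁻¹ρ₀, and f follows from one linear solve per value of γ. The sketch below reuses the helpers from the earlier sketches; H_FMO is a placeholder random Hamiltonian, not the Hamiltonian of [4], and the rate conversions follow the stated 1 cm⁻¹ = 2.9978 · 10¹⁰ s⁻¹.

```python
# Sketch of the efficiency calculation; H_FMO is a PLACEHOLDER, not the
# FMO Hamiltonian of [4]. Rates in cm^-1: 1 ps^-1 ~ 33.36, 1 ns^-1 ~ 0.033.
import numpy as np

n = 7
rng = np.random.default_rng(4)
H_FMO = rng.normal(0.0, 100.0, (n, n)); H_FMO = 0.5 * (H_FMO + H_FMO.T)

kappa_trap = 33.36                         # trapping at site 3, ~1 ps^-1
kappa = np.full(n, 0.033)                  # recombination, ~1 ns^-1
kappa[2] += kappa_trap

gamma = np.full(n, 170.0)                  # near the reported optimum
M = add_loss(lindblad_superop(H_FMO, gamma), kappa)
rho0 = np.zeros(n * n, dtype=complex)
rho0[0 * n + 0] = rho0[5 * n + 5] = 0.5    # p0 = (1/2, 0, 0, 0, 0, 1/2, 0)
rho_int = -np.linalg.solve(M, rho0)        # = integral of rho(t) over all t
f = kappa_trap * rho_int[2 * n + 2].real   # trapped population at site 3
print("efficiency f =", f)
```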
We also computed N₀ at maximal transfer efficiency. It is interesting that the rate between sites 2 and 3 is actually smaller than the rate between sites 2 and 6, even though |V₂₃| > |V₂₆|. The reason is the large energy gap of 420 cm⁻¹ between sites 2 and 3, while sites 2 and 6 have an energy gap of only 60 cm⁻¹. However, the values for the site energies are still up to some debate [1,4], and small changes can easily turn this behavior to the opposite again.

Resolvent difference bounds

The following three Lemmas are the main technical parts of our bounds. They all consist of bounding the operator norm of the resolvent difference for different values of z. Conceptually the bounding procedure is simple; we only employ the inverse bounds introduced in 4.3. Loosely speaking, if |z| < Γ we can expand (b − z)⁻¹, and then the two terms in S(z) only have a small difference in the denominator, so, using another inverse bound, they almost cancel. If |z| > Γ then |z| ≫ ‖a†b⁻¹a‖ and we can directly use the second step from the case |z| < Γ. Of course we also have to keep in mind where the poles of S(z) are. According to Proposition 4, (z − N)⁻¹ has poles on the real axis below −µ, which move according to the scaling Θ²Γ⁻¹. On the other hand, (z − a†(b − z)⁻¹a)⁻¹ has poles close to the poles of (z − N)⁻¹ that approximately cancel each other, but it also has poles close to the eigenvalues of b, which are approximately α_ij = −γ_ij + iE_ij and ᾱ_ij, scaling like Γ. Comparing the two sets of poles, the b-poles are much further to the left (negative real values) than the N-poles, because Γ ≫ Θ²Γ⁻¹. Our lemmas steer clear of these poles by keeping Re z ≥ −µ/2. Lemma 11 contains bounds for Re z ≥ 0, which on the one hand ensures there are no poles on the right side of the complex plane, and on the other hand lets us use the bounds for z = iy to bound the relaxation time. Lemma 12 contains bounds for the region −µ/2 ≤ Re z ≤ 0; the bounds are derived in a similar fashion as in Lemma 11, but there are some additional complications.

Bounds in the right half plane

Proof (of Lemma 11). 1. Assume Re z ≥ 0 and |z| ≤ α ∝ Γ. Because |z| ≤ ½‖b⁻¹‖⁻¹ we can use (15) on (b − z)⁻¹. To use (15) on the resulting expression, notice that the correction term is controlled by κ and is therefore small; here (16) was applied in the last step, using the fact that a†b⁻¹a is self-adjoint, from Proposition 4. This is just the condition for the bound; again using (16) and also (17), we get the two bounds for |z| ≤ α. The first bound is bound 1 of the Lemma; the second bound will be used below. 2. We now derive a bound when |z| ≥ α and Re z ≥ 0; we combine it with (24) to receive bound 2 for all z on the imaginary axis. If ΘΓ⁻¹ is small enough, then |z| stays at a distance of order Γ from the spectrum of b, where the latter uses the fact that the spectrum of b approaches the spectrum of b₀ as ΘΓ⁻¹ becomes small, and the spectrum of b₀, which is −γ_ij ± iE_ij, has negative real part −γ_ij < 0. These two inequalities are the conditions to use (15), and they give the two bounds, with b_min, defined in (25), the closest any eigenvalue of b gets to the imaginary axis. For Lemma 12 the corresponding step is as follows: we derive a bound when |y| ≥ α̃. If ΘΓ⁻¹ is small enough, the same two inequalities are again the conditions to use (15) and give the two bounds; using b_min from (25) then gives, for |y| > α̃, the bound in part 2 of the Lemma.

Conclusion

We studied two kinetic networks that approximate the energy transfer in a quantum network subject to dephasing. The first network, N₀, derives its rates from the direct interaction between pairs of sites, while the second, N, includes higher order corrections.
We proved that the relative relaxation time errors are proportional to ΘΓ⁻¹ and Θ²Γ⁻² respectively. Hence, the approximations are good if the interactions get weak, or the dephasing and/or energy gaps get large. In the case of the FMO complex, both kinetic networks are good approximations in the regime of dephasing-assisted energy transfer. With simulations we found that the more complex kinetic network N provides approximations with a percentage error 5-6 orders of magnitude smaller than the simple kinetic network. The study of these approximations could be extended in several ways. First, one could study the higher order corrections involved in N. Second, when the interactions V_kl are complex, N can be non-symmetric, meaning population exchange between sites is directed; this might relate to coherent cancellations along loops as mentioned in [3]. And finally, it would be interesting to see how our method of splitting population and coherence space to achieve kinetic network approximations could be generalized to other quantum networks, and how it relates to existing models that approximate coherent evolution with incoherent statistical evolution.

Acknowledgments

I want to thank Chris King for his support, ideas and many useful discussions.

A Three sites

In the following we write out parts of the master equation (3) for the case n = 3 and then derive the form of the matrix M. We then explain how to generalize that form to higher n. For simplicity of notation we omit the scaling factors Γ and Θ until we reach a block matrix expression. First note that, with a standard calculation, one finds that L(ρ) decreases the coherences in the manner (L(ρ))_kl = −γ_kl ρ_kl, where k ≠ l and γ_kl = ½(γ_k + γ_l), and (L(ρ))_kk = 0. This gives a diagonal contribution −γ_kl in the diagonal of the two rows corresponding to the real and imaginary parts of ρ_kl. Now we evaluate the commutator −i[H₀ + V, ρ]. From the 1x1 entry we get an equation for ρ̇₁₁ in terms of the real and imaginary parts of the coherences (superscripts r and i are shortcuts for real and imaginary parts), and from the 1x2 entry we get equations for ρ̇ʳ₁₂ and ρ̇ⁱ₁₂. Written as a block matrix, one can see the explicit forms of the matrices a and b. Remember that we also separated b into two parts: we set the 2x2-block diagonal that scales like Γ (the E_ij and γ_ij entries) to be b₀, and we set the block-off-diagonal that scales like Θ (all the V_ij entries) to be ν, so b = b₀ + ν. In 3.3, in (8), we defined a transformation U to diagonalize b₀. If we extend this transformation to the entire space P ⊕ C as Û = 𝟙ₙ ⊕ U, we can apply it to M directly and get M̃ = Û†MÛ, which has the same block structure with ã, b̃₀ and ν̃ in place of a, b₀ and ν; the expressions for the networks also hold with all the tildes removed. It is straightforward to generalize the matrices ã and b̃₀ to n > 3. Matrix ã connects the population of site k to the coherences between site k and any other site l, with strength V_kl, and matrix b̃₀ is a diagonal matrix with entries α_ij and ᾱ_ij. A bit more complicated is the matrix ν̃; it is described in the next subsection.

B General construction

Here we give a description of how to find ã, b̃₀ and ν̃ for general n. We number the n dimensions of population space P with k, where k = 1, 2, ..., n, and the (n²−n) dimensions of coherence space C with kl and k̄l, where k < l are numbers from 1 to n. According to the order defined in 2.2, the first few dimensions of C are called 12, 1̄2, 13, ..., 23, 2̄3, 24, etc.

B.1 Constructing ã and b̃₀

Matrix ã is an (n²−n) × n complex matrix, with the only nonzero entries in the two rows for the coherence between sites k and l: the first row has V_kl in column k and −V_kl in column l, and the second row has V̄_kl and −V̄_kl accordingly.

B.2 Constructing ν̃

The matrix ν̃ = U†νU for any n is a somewhat complicated pattern of entries V_kl, signs and complex conjugates.
It connects coherences between sites k and l with coherences between sites k and m, with strength V_lm. Entries of ν̃ are only non-zero if one index of the two double indices matches, with further conditions on their conjugation. Table 1 shows the rules for the nonzero entries.

C.1 Highly connected network

Assume all sites are equally interacting and have the same energies and dephasing rates: V_kl = Θ for all k ≠ l, E_kl = 0, γ_k = Γ. Then every column in a has 2(n−1) non-zero entries, all of magnitude Θ. A simple calculation shows that

a†a = 2nΘ² (𝟙ₙ − n e e†),

so for any v ∈ I we have a†a v = 2nΘ² v; hence ‖a‖ = √(2n) Θ. Obviously ‖b₀⁻¹‖⁻¹ = Γ, since all energy differences vanish, so κ₀ = 2nΘ²Γ⁻². To get the bound on ∆τ₁,rel we also estimate ‖ν‖: each column and row of ν has of order (n−2) nonzero entries of size Θ, and so ‖ν‖ is of order nΘ, i.e., bounded above (and below) by vnΘ for scaling and dimension independent constants. Then Theorem 6 gives the bound ∆τ₁,rel ≤ 4vnΘΓ⁻¹. The condition for this bound is ‖ν‖ ≤ ½‖b⁻¹‖⁻¹; the LHS is bounded from below by vnΘ while the RHS is constant, so the condition does not hold for large n.

C.2 Circular chain

Assume the sites are positioned on a circle and only nearest neighbors interact with strength Θ:

V_kl = Θ if |k − l| = 1, and V_kl = 0 else,

where we set the equivalence n ≡ 0. Further, γ_k = Γ, and the E_k are such that E_kl = ΓE when |k − l| = 1, which is possible for n even. Now the column for site k in a has only 4 entries, two each for the coherences with k−1 and k+1. We calculate ‖a†a‖ = 8Θ² on I, so ‖a‖ = √8 Θ; in particular there is no n dependence. Also ‖b₀⁻¹‖ = 1/√(Γ² + Γ²E²), and so κ₀ = 8Θ²Γ⁻²/(1 + E²). The smallest relaxation rate µ₀ is proportional to Θ²Γ⁻¹/n² (the lowest eigenvalue of the ring rate matrix N₀). Inserting these quantities into Theorem 5, moving the numbers into constants k₁ and k₂, and dropping the 1 in 1 + β (fine for large n), we obtain ∆τ_rel ≤ c₃Θ²Γ⁻²n². We again estimate ‖ν‖: now each column and row of ν has 2 or 4 nonzero entries, and so

v₁Θ ≤ ‖ν‖ ≤ v₂Θ

for some scaling and dimension independent constants v₁ and v₂. Then Theorem 6 gives the bound ∆τ₁,rel ≤ (4/π²) v₂ n² ΘΓ⁻¹. This time the condition does not break down for large dimensions, so the bound holds for all n when Θ and Γ are kept constant.
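Continuing the sketch, the Appendix C norms can be verified numerically; ‖a‖ is the operator norm of the population-to-coherence block of the superoperator restricted to I, and the helpers below are ours.

```python
# Continuing the sketch: ||a|| = sqrt(2n) Theta (fully connected) and
# ||a|| = sqrt(8) Theta (circular chain, n even), both on I = e_perp.
import numpy as np

def coupling_norm(V):
    n = V.shape[0]
    M = lindblad_superop(V, np.ones(n))     # energies play no role in the a block
    pop = np.array([k * n + k for k in range(n)])
    coh = np.setdiff1d(np.arange(n * n), pop)
    a = M[np.ix_(coh, pop)]
    Q = np.eye(n) - np.ones((n, n)) / n     # projector onto I
    return np.linalg.norm(a @ Q, 2)

n, Theta = 6, 0.7
V_full = Theta * (np.ones((n, n)) - np.eye(n))
V_ring = np.zeros((n, n))
for k in range(n):
    V_ring[k, (k + 1) % n] = V_ring[(k + 1) % n, k] = Theta
print(coupling_norm(V_full), np.sqrt(2 * n) * Theta)   # agree
print(coupling_norm(V_ring), np.sqrt(8) * Theta)       # agree
```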
Dynamic matrices with DNA-encoded viscoelasticity for advanced cell and organoid culture

3D cell and organoid cultures, which allow in vitro studies of organogenesis and carcinogenesis, rely on the mechanical support of viscoelastic matrices. However, commonly used matrix materials lack rational design and control over key cell-instructive properties. Herein, we report a class of fully synthetic hydrogels based on novel DNA libraries that self-assemble with ultra-high molecular weight polymers, forming a dynamic DNA-crosslinked matrix (DyNAtrix). DyNAtrix enables, for the first time, computationally predictable, systematic, and independent control over critical viscoelasticity parameters by merely changing DNA sequence information, without affecting the compositional features of the system. This approach enables: (1) thermodynamic and kinetic control over network formation; (2) adjustable heat-activation for the homogeneous embedding of mammalian cells; and (3) dynamic tuning of stress relaxation times over several orders of magnitude, recapitulating the mechanical characteristics of living tissues. DyNAtrix is self-healing and printable, and it exhibits high stability, cyto- and hemocompatibility, and controllable degradation. DyNAtrix-based 3D cultures of human mesenchymal stromal cells, pluripotent stem cells, canine kidney cysts, and human placental organoids exhibit high viability (on par with or superior to reference matrices), proliferation, and morphogenesis over several days to weeks. DyNAtrix thus represents a programmable and versatile precision matrix, paving the way for advanced approaches to biomechanics, biophysics, and tissue engineering.

Introduction

Supramolecular hydrogels are water-swollen three-dimensional (3D) networks of molecules linked together by non-covalent bonds [1,2]. The dynamic nature of these bonds gives access to unique properties, such as stimuli-responsiveness and self-healing [3,4]. Hydrogels that contain self-assembling DNA are particularly interesting, as the sequence-selective recognition of complementary DNA strands enables modular construction and de-construction of complex functional materials [5-7]. With the help of DNA nanotechnology, the properties of these systems can be adjusted and dynamically changed in situ [8,9]. DNA-based materials can thus be adapted to respond to biomolecules and cells to sense [10-13], actuate [14,15], and release [16,17]. Amongst their many applications, DNA-based hydrogels could support 3D cell cultures, for instance, to mechanistically study cell and developmental biology, recapitulate pathologies, and develop (personalized) therapies [17-20]. We herein report a novel approach to a programmable dynamic DNA-crosslinked matrix platform (DyNAtrix). DyNAtrix is based on a diverse set of DNA modules that self-assemble with a biocompatible DNA-grafted ultra-high molecular weight (UHMW) poly(acrylamide-co-acrylic acid) backbone. By using novel crosslinker libraries, rather than single crosslinker splints, the formation of the supramolecular network and its mechanics can be uniquely controlled without changing crosslinker concentrations or other chemical components of the system. As a result, gelation occurs at very low DNA concentrations, where crosslink efficiencies approach the theoretical limit for affine molecular networks. Importantly, the DNA libraries provide dynamic control over viscoelasticity and plasticity, tunable crosslinking kinetics and mixing behavior, as well as adjustable degradability.
These features demonstrate the unique programmability of soft materials powered by DNA nanotechnology as 3D cell culture matrices and printable bio-inks.

Material concept and design

To create a programmable yet inexpensive and cytocompatible hydrogel, we first synthesized three derivatives of a UHMW poly(acrylamide-co-acrylic acid)-graft-DNA [38,39] copolymer (Figure 1a). We chose this backbone due to its high molecular weight, anticipated biocompatibility, and methanol responsiveness. The latter enables efficient removal of unreacted DNA and cytotoxic acrylamide monomers. We hypothesized that the UHMW backbone could serve as the majority structural component. The three derivatives, termed P1, P5, and P10, were obtained with high molecular weight (Supplementary Table S2) and an estimated average number of 3, 20, and 28 DNA strands per backbone, respectively (Supplementary Note 3.2). The covalently attached DNA strands serve as universal anchor sites for self-assembly with DNA crosslinkers. Moreover, they allow modular noncovalent extension of the material with other functional DNA constructs. A single polymer batch can thus be used to assemble materials with vastly diverse properties. For studies involving cell culture, additional synthetic peptide side chains were included in the synthesis. The peptides contain the arginylglycylaspartic acid (RGD) motif that facilitates cell adhesion. The two synthesized peptide-grafted polymer derivatives (termed P5-RGD and P10-RGD) had similar yield and molecular weight as their peptide-free counterparts (Supplementary Tables S2, S3). To our surprise, initial attempts to crosslink the polymers at low concentration with simple crosslinker splints failed to produce a stable hydrogel. As the mechanical strength of a hydrogel is proportional to the number of effective crosslinks (Supplementary Note 3.3), we hypothesized that polymer self-binding (i.e., ineffective intra-molecular bonds) dominated over network-forming (i.e., inter-molecular) crosslinks (Figure 1b). The predominance of undesirable intramolecular bonding is not specific to DNA gels, but a general phenomenon in crosslinked polymer gels, typically reducing crosslink efficiency to 20% or less [40,41]. The sequence-selectivity of DNA hybridization offered a unique solution to this problem: instead of using a single splint type, we constructed complex libraries based on a dual splint design (Figure 1c,d). All library members have an identical adapter domain that binds to the anchor strands below a specific melting temperature, T1. Additionally, each splint has an overlap domain that is designed to pair with only one other splint below a second melting temperature, T2. The melting temperature is predictable with high accuracy based on nearest-neighbor thermodynamic models [42]. We designed anchor and overlap domains to have sufficiently separated melting temperatures, ensuring that upon slow cooling from 95°C the anchor domains would bind prior to the overlap domains (Supplementary Figure S4). In principle, overlap domains could be designed with perfectly orthogonal sequences. However, this would require costly synthesis of many different oligonucleotides. Instead, we chose a combinatorial approach where overlap domains are diversified by introducing ambiguous bases (N) at specific positions (Figure 1d). N indicates a sequence position at which an equal mixture of A, T, C, and G nucleotides is used during oligonucleotide synthesis. The complexity (i.e., the number of distinct splint pairs) of this combinatorial crosslinker library (CCL) equals 4^n, where n is the number of ambiguous bases in the sequence. As the DNA duplex is strongly destabilized even by a single base mismatch [43], only splint pairs with fully complementary overlap domains form stable crosslinks.
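As a toy illustration of this combinatorial argument (our own sketch, not the paper's simulation code), one can estimate how often a backbone carries two splints that could pair with each other; such pairs are the raw material for intramolecular loops, and their frequency drops as library complexity grows. The random-assignment model and the trial counts are our simplifying assumptions; the anchor numbers follow P1/P5/P10.

```python
# Toy Monte Carlo: fraction of anchors on one backbone that see a
# complementary overlap domain on the same backbone (potential loops),
# versus library complexity c.
import numpy as np

def loop_prone_fraction(m, c, trials, rng):
    hits = 0
    for _ in range(trials):
        t = rng.integers(0, c, m)    # overlap type carried by each anchor
        s = rng.integers(0, 2, m)    # which half of the splint pair it is
        for i in range(m):
            if ((t == t[i]) & (s == 1 - s[i])).any():
                hits += 1
    return hits / (m * trials)

rng = np.random.default_rng(0)
for m, label in [(3, "P1"), (20, "P5"), (28, "P10")]:
    fracs = [loop_prone_fraction(m, c, 2000, rng) for c in (1, 4, 16, 64, 256)]
    print(label, [round(f, 3) for f in fracs])
```

In this toy model, the library complexity needed to suppress most potential loops grows with the number of anchors per backbone, qualitatively consistent with the statistical simulations discussed next.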
CCL complexity uniquely controls network formation and matrix stiffness, allowing gelation at exceptionally low DNA content

According to statistical simulations, the suppression of undesired intra-molecular crosslinks strongly depends on the complexity of the CCL, as well as the number of anchor strands per polymer backbone (Figure 2a). To suppress at least 80% of intramolecular crosslinks, P1, P5, and P10 were predicted to require approximately 4, 40, and 60 splint pairs, respectively. We first studied the mechanical properties of CCL-crosslinked gels. Adjusting CCL complexity represents a novel approach to tuning the elasticity of a gel without changing the crosslinker or polymer concentration. Increasing the CCL complexity from CCL-1 to CCL-64 (at constant overall DNA concentration) improved crosslinking efficiency from approximately 28% to 76% (Figure 2c, Supplementary Note 3.1), which is in good qualitative agreement with the prediction (Figure 2a). The CCL complexity thus controlled G' values in the range of 50 to 140 Pa, which is comparable to Matrigel, the most widely used culture matrix for organoids. We note that our statistical simulation (Figure 2a) only predicts the upper limit for intramolecular bonds; it does not account for kinetic or certain topological effects [44]. For instance, increasing library complexity from 64 to 256 splint pairs did not further increase stiffness. We suspect that the marginal benefit of using CCL-256 is likely offset by its slower binding kinetics, since overlap domains of polymer-bound crosslinker strands become increasingly unlikely to find and capture their zero-mismatch binding partners in very large libraries. We also note that CCLs are expected to improve crosslinking efficiency only when the solid content of the gel is low. This is because at high concentrations and under equilibrium conditions intermolecular bonds can outcompete intramolecular loops. The latter can also mechanically interlock, thereby contributing to the network's elasticity [44].

Rapid self-healing allows printing of complex patterns under cytocompatible conditions

Extrusion bioprinters require self-healing hydrogels that can be easily liquefied under mechanical forces and rapidly recover their viscoelasticity. Such properties allow the material to pass through the nozzle as a fluid, protect the cells from damaging forces, and quickly reform the network after extrusion. To explore the compatibility of DyNAtrix with extrusion bioprinting, we first tested its self-healing properties by oscillatory rheology (Figure 3b, Supplementary Figure S9). When a large strain (1000%) is applied, the supramolecular network breaks apart and transitions into a liquid state, as revealed by a drastic drop of its stiffness (G' << G") and an increase of the phase angle from 3°±1° to 74°±1°. When the strain is reduced, the gel state is quickly restored. After being subjected to 10 consecutive breakage cycles, the storage modulus recovered to over 95% of its original value, revealing that breakage occurs predominantly at the reversible DNA crosslinks rather than the covalent polymer backbone. Due to this rapid self-healing, DyNAtrix could be extruded into complex printed patterns under cytocompatible conditions.

Nanomechanical crosslinker stability encodes macroscopic stress-relaxation behavior

As demonstrated by Mooney, Chaudhuri and colleagues, matrix plasticity influences cell development [28,51].
Rapid self-healing allows printing of complex patterns under cytocompatible conditions

Extrusion bioprinters require self-healing hydrogels that can be easily liquefied under mechanical forces and rapidly recover their viscoelasticity. Such properties allow the material to pass through the nozzle as a fluid, protect the cells from damaging forces, and quickly reform the network after extrusion. To explore the compatibility of DyNAtrix with extrusion bioprinting, we first tested its self-healing properties by oscillatory rheology (Figure 3b, Supplementary Figure S9). When a large strain (1000%) is applied, the supramolecular network breaks apart and transitions into a liquid state, as revealed by a drastic drop of its stiffness (G' << G") and an increase of the phase angle from 3°±1° to 74°±1°. When reducing the strain, the gel state is quickly restored. After being subjected to 10 consecutive breakage cycles, the storage modulus recovered to over 95% of its original value, revealing that breakage occurs predominantly at the reversible DNA crosslinks rather than the covalent polymer backbone. Due to this rapid self-healing, DyNAtrix could be extruded into complex printed patterns under cytocompatible conditions.

Nanomechanical crosslinker stability encodes macroscopic stress-relaxation behavior

As demonstrated by Mooney, Chaudhuri and colleagues, matrix plasticity influences cell development 28,51. Previously described approaches to alter plasticity require significant changes to the chemical composition or temperature; such parameters can influence cell development even independently of stress relaxation. We therefore designed stress-relaxation crosslinkers (SRCs) in which the length of the overlap domain, and thus its nanomechanical stability, is systematically varied (Supplementary Table S1). As the overlap domains are the weakest connection points in the supramolecular network, their rupture force was predicted to dictate the material's stress relaxation behavior. Indeed, SRCs with overlap domains ranging from 6 to 18 nucleotides allowed the adjustment of the stress-relaxation time (τ) from less than 1 s to more than 1000 s. Importantly, the stress relaxation time should not influence the plateau stiffness of the material, as long as the lifetime of the overlap duplex is long compared to the timescale of the experiment. Frequency-dependent rheological measurements confirm that, despite the vastly different stress relaxation times, all SRC-crosslinked gels using 8-18 nt overlap domains exhibit similar plateau stiffnesses (Supplementary Figure S7). Due to their sub-second relaxation time, gels crosslinked by 6-nt SRCs are solid only at high deformation frequencies. Overall, SRCs allow precise mimicking of the typical stress relaxation times that are found across diverse animal tissues 28 (Figure 3f).

Heat-activated crosslinkers enable homogeneous encapsulation of mammalian cells

The previous sections described how CCLs and SRCs permit predictable engineering of network formation, thermodynamic stability, and (nano)mechanical properties. We next exploited the tunable binding kinetics of DNA in order to make CCL-crosslinked DyNAtrix compatible with existing cell encapsulation workflows. In commercially available cell culture matrices, cells are typically first dispersed within a liquid precursor solution. Subsequently, gelation is induced to encapsulate the cells. For instance, Matrigel precursor solution is typically mixed with cells at 4°C, followed by incubation at ~37°C. The heating step conveniently triggers gelation at the ideal temperature for mammalian cell culture. In contrast, DNA-crosslinked hydrogels typically require heating above the crosslinker melting temperature to become liquid (>50°C), which is incompatible with cell culture. As an alternative to thermal annealing, we first attempted to encapsulate cells by rapidly mixing two pre-annealed polymer precursor solutions (Figure 4b, left). However, the fast hybridization kinetics of DNA leads to crosslink formation before a homogeneous mixture can be achieved. The resulting supramolecular networks were highly inhomogeneous (Figure 4d, left) and exhibited poor mechanical stiffness (Figure 4c, green trace). As a solution, we aimed to mimic the convenient thermoresponsive profile of Matrigel by altering the DNA crosslinking kinetics. We designed heat-activated crosslinkers (HACs) by upgrading the CCL design with blocking strands (Figure 4a, Supplementary Note 3.4). The blocking strands serve as protection groups that bind to the overlap domain at low temperatures to prevent premature crosslinking. The blocking strands were designed to spontaneously dissociate from the overlap domains at 37°C, thereby activating the crosslinker for binding. A mixture of two polymer precursor solutions is thus trapped in an off-equilibrium (metastable) liquid state at 4°C, such that the mixing with cells and medium can proceed to completion. In agreement with this design, heating to 37°C leads to rapid formation of a homogeneous gel (Figure 4d, right) with highly superior mechanical stiffness (Figure 4c, blue trace).
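Since both the SRC overlap lengths and the blocking-strand activation temperature hinge on predictable duplex melting, the snippet below sketches nearest-neighbor Tm estimates with Biopython's MeltingTemp module; the sequences and buffer parameters are placeholder assumptions rather than the study's actual designs.

```python
from Bio.SeqUtils import MeltingTemp as mt

# Hypothetical overlap domains of increasing length (6-18 nt).
overlaps = ["GCATCG", "GCATCGTTAG", "GCATCGTTAGCAGT", "GCATCGTTAGCAGTTGCA"]

for seq in overlaps:
    # Nearest-neighbor Tm; salt (mM) and strand concentrations (nM)
    # are illustrative assumptions, not the study's conditions.
    tm = mt.Tm_NN(seq, Na=150, dnac1=250, dnac2=250)
    print(f"{len(seq):2d}-nt overlap: predicted Tm ~ {tm:5.1f} deg C")
```

Longer overlaps give higher Tm values and longer duplex lifetimes, which is the qualitative trend the SRC series above exploits.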
Note that in the absence of blocking strands the material was relatively stiff at 4°C and drastically softened upon heating to 37°C. This finding suggests that during initial mixing numerous base-mismatched crosslinks are rapidly formed, yet they are labile and dissociate upon heating, leaving behind a network with few effective crosslinks.

Matrix stability in cell culture, cytocompatibility, and tunable degradation

An immediate challenge for applying DNA-based materials in cell culture relates to their stability, as DNA is readily degraded by nucleases. DNase I, in particular, is present in fetal bovine serum (FBS) that is commonly added to culture media. It is thus necessary to protect DNA crosslinkers from DNase I activity; yet, on the flip side, a certain degree of enzymatic reconfigurability can be desirable. We therefore sought to identify conditions under which the enzymatic digestion of DNA-based gels can be either promoted or inhibited. First, we tested the stability of unprotected gels in the presence of culture medium that was supplemented with 10% FBS. Gels were largely degraded within 48 hours (Supplementary Video 2). We then tested the effect of actin, which is a natural inhibitor of DNase I 53. Using a fluorescently labeled DNA mock target, we showed that the rate of digestion by DNase I can be gradually tuned with different actin concentrations (Figure 5a). A concentration of 50 µg/mL was sufficient to suppress gel degradation for at least 48 hours (Figure 5b, Supplementary Video 3), and 80 µg/mL actin reduced the rate of digestion by 97.5%.

Innate immune response and hemocompatibility

To further evaluate the properties of DyNAtrix as a biomaterial, we incubated HAC-64-crosslinked P5 and P5-RGD (1%, w/v) with fresh human whole blood and tested for various markers of inflammation and hemostasis. We compared these two gels to three reference substrates: (I) reactively cleaned glass, (II) Teflon™ AF (a benchmark material for medical devices), and (III) covalently crosslinked polyacrylamide (AA) gel. DyNAtrix exhibited low activation of monocytes (Figure 5f) and granulocytes (Supplementary Figure S11b,c), as indicated by low levels of CD11b expression. Similarly, we measured low levels of C5a, which is a marker for the activation of the complement system (Supplementary Figure S11a). Thus, DyNAtrix did not induce a significant innate immune response in vitro. Moreover, both gels showed very low release of platelet factor 4 (PF4, Figure 5f) and prothrombin fragment 1+2 (F1+2, Supplementary Figure S11d). The results indicate reduced blood coagulation and platelet activation with respect to glass and Teflon™ AF, similar to the AA gel reference. These results suggest potential suitability for in vivo studies and medical applications. No statistically significant differences were observed between gels constructed from P5 and P5-RGD.

DyNAtrix supports proliferation and guides morphogenesis of diverse cell types and organoids

We finally sought to validate the suitability of DyNAtrix for advanced cell culture and organoid research (Supplementary Figure S13c). To demonstrate sustained proliferation, we adapted a recently reported protocol for long-term placenta organoid culture 54. DyNAtrix crosslinkers were degraded with DNase I on day 7 and the released organoids were successfully passaged two consecutive times, resulting in a total culture duration of 21 days (Supplementary Figures S23-24).
The organoid morphology in DyNAtrix and Matrigel was investigated in immunofluorescence images obtained from confocal microscopy (Figure 6c,g, Supplementary Figure S19). We found strong expression of GATA3, a trophoblast-specific transcription factor typically found in all cells of placenta organoids (Figure 6c). Overall, confocal imaging did not reveal any morphological or developmental differences of placenta organoids in DyNAtrix when compared to Matrigel. To the best of our knowledge, this is the first report describing the successful growth of human placenta organoids in a fully synthetic 3D matrix environment. Beyond the scope of this article, future studies will focus on utilizing the independently tunable stress-relaxation, elasticity, and adhesion signals in DyNAtrix to improve the physiological relevance of this organoid model for in vivo placental development (Supplementary Figure S15).

The construction of biocompatible hydrogels that combine dynamic viscoelastic properties with long-term stability in cell culture has been a major challenge 21 and remains one of the central hurdles in the field of regenerative medicine. DyNAtrix exhibits high cyto- and hemocompatibility, is self-healing, and is suitable for injection and 3D extrusion printing. All of its components can be easily sterilized by filtration or alcohol treatment and stored (ready-to-use) for years. However, surprisingly few studies had explored the use of DNA-crosslinked hydrogels in 3D cell and tissue culture 50,58-60. Problems relating to high required DNA content, premature gel degradation, and seamless integration with existing cell culture workflows had remained largely unsolved. In the present study we show that the combination of a UHMW polymer backbone with CCLs enables very low DNA concentrations, up to 2 orders of magnitude lower than previously reported synthetic DNA-crosslinked gels. This approach gives access to soft gels with elastic moduli in the range of 1-500 Pa, which are suitable for the study of many mechano-sensitive stem cell and organoid systems 61. The lower end of the stiffness regime is particularly relevant for systems that benefit from ultra-soft environments, such as placenta organoids (as shown in this study) and stem cell models for embryo development. Due to the low DNA content, DyNAtrix is non-immunogenic, making it potentially suitable for medical applications, such as injectable cell-laden gels, drug release systems, or coatings for medical devices. The low DNA content also enables scalable high-throughput applications at relatively low material cost (Supplementary Table S4). Though DyNAtrix supported all tested cell types in this study, we note that it currently carries only a single type of adhesion ligand. In the future, it may become necessary to attach multiple ligands to the backbone to better mimic the complex adhesion properties and signaling of the native ECM.

Author contributions
wrote the initial manuscript draft. All authors discussed the results and helped revise the manuscript.

Data availability
Supplementary information containing materials and methods, supplementary figures, tables, datasets, and accession numbers for biological materials is available with this paper. Additional datasets and materials generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Code availability
A python script for the statistical simulation of the maximum percentage of intramolecular crosslinks as a function of CCL complexity is available as Supplementary File 11.
A python script for the thermodynamic calculations of CCL interactions is available as Supplementary File 12.

Ethics statement
Informed consent was obtained from all recipients and/or donors of cells or tissues. The study involving human whole blood was covered by the ethics vote EK-BR-24/18-1 of the Sächsische Landesärztekammer. The blood was obtained from two voluntary ABO-matched donors who had not used any medicine in the past ten days. The study involving human bone marrow-derived MSCs was covered by the ethics votes EK221102004 and EK47022007 at TU Dresden. MSCs were isolated from healthy female/male donors.
2022-10-12T13:26:58.709Z
2023-03-22T00:00:00.000
{ "year": 2023, "sha1": "2b706c9909ea1c41e54126a7b1837801bad1a2ad", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2022/10/08/2022.10.08.510936.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "872148a5913315ec424d296155d4a0282508a738", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
20195300
pes2o/s2orc
v3-fos-license
Common pitfalls in statistical analysis: Measures of agreement

Agreement between measurements refers to the degree of concordance between two (or more) sets of measurements. Statistical methods to test agreement are used to assess inter-rater variability or to decide whether one technique for measuring a variable can substitute for another. In this article, we look at statistical measures of agreement for different types of data and discuss the differences between these and those for assessing correlation.

INTRODUCTION
Often, one is interested in knowing whether measurements made by two (sometimes more than two) different observers or by two different techniques produce similar results. This is referred to as agreement or concordance or reproducibility between measurements. Such analysis looks at pairs of measurements, either both categorical or both numeric, with each pair having been made on one individual (or a pathology slide, or an X-ray). Superficially, these data may appear to be amenable to analysis using methods used for 2 × 2 tables (if the variable is categorical) or correlation (if numeric), which we have discussed previously in this series [1,2]. However, a closer look would show that this is not true. In those methods, the two measurements on each individual relate to different variables (e.g., exposure and outcome, or height and weight, etc.), whereas in the "agreement" studies, the two measurements relate to the same variable (e.g., chest radiographs rated by two radiologists or hemoglobin measured by two methods).

WHAT IS AGREEMENT?
Let us consider the case of two examiners A and B evaluating answer sheets of 20 students in a class and marking each of them as "pass" or "fail," with each examiner passing half the students. Table 1 shows three different situations that may happen. In situation 1 in this table, eight students receive a "pass" grade from both examiners, eight receive a "fail" grade from both examiners, and four receive a "pass" grade from one examiner but a "fail" grade from the other (two passed by A and the other two by B). Thus, the two examiners' results agree for 16/20 students (agreement = 16/20 = 0.80, disagreement = 4/20 = 0.20). This seems quite good. However, this fails to take into account that some of the grades may have been guesswork and that the agreement may have occurred just by chance. Let us now consider a hypothetical situation where examiners do exactly this, i.e., assign grades by tossing a coin; heads = pass, tails = fail [Table 1, Situation 2]. In that case, one would expect 25% (= 0.50 × 0.50) of students to receive a "pass" grade from both and another 25% to receive a "fail" grade from both, giving an overall "expected" agreement rate for "pass" or "fail" of 50% (= 0.25 + 0.25 = 0.50). Hence, the observed agreement rate (80% in situation 1) needs to be interpreted keeping in mind that 50% agreement was expected purely by chance. These examiners could have bettered this by 50% (best possible agreement minus the agreement expected by chance = 100% − 50% = 50%), but achieved only 30% (observed agreement minus the agreement expected by chance = 80% − 50% = 30%). Thus, their real performance in being concordant is 30%/50% = 60%. Of course, they could theoretically have performed worse than what was expected by chance.
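The arithmetic above is easy to reproduce; the following Python sketch computes Cohen's kappa from a 2 × 2 agreement table and applies it to Situation 1 of Table 1.

```python
def cohens_kappa(both_pass, a_pass_b_fail, a_fail_b_pass, both_fail):
    """Cohen's kappa for two raters giving pass/fail grades."""
    n = both_pass + a_pass_b_fail + a_fail_b_pass + both_fail
    p_observed = (both_pass + both_fail) / n
    # Chance agreement from each examiner's marginal pass/fail rates.
    pass_a = (both_pass + a_pass_b_fail) / n
    pass_b = (both_pass + a_fail_b_pass) / n
    p_chance = pass_a * pass_b + (1 - pass_a) * (1 - pass_b)
    return (p_observed - p_chance) / (1 - p_chance)

# Situation 1: 8 pass/pass, 2 + 2 discordant, 8 fail/fail.
print(cohens_kappa(8, 2, 2, 8))  # 0.6, i.e., the 30%/50% computed above
```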
For instance, in situation 3 [Table 1], even though each of them passed 50% of students, their grades agreed for only 4 of the 20 students, far fewer than that expected even by chance! It is important to note that, in each of the three situations in Table 1, the pass percentages for the two examiners are equal, and if the two examiners are compared using a usual 2 × 2 test for paired data (McNemar's test), one would find no difference between their performances; by contrast, the inter-observer agreement in the three situations is widely different. The basic concept to be understood here is that "agreement" quantifies the concordance between the two examiners for each of the "pairs" of scores and not the similarity of the overall pass percentage between the examiners.

METHODS USED TO MEASURE AGREEMENT
The statistical methods used to assess agreement vary depending on the type of variable being studied and the number of observers between whom agreement is sought to be assessed. These are summarized in Table 2 and discussed below.

Two observers assessing the same binary outcome (Cohen's kappa)
Cohen's kappa (κ) calculates inter-observer agreement taking into account the expected agreement by chance, as follows:

κ = (observed agreement − agreement expected by chance) / (1 − agreement expected by chance)

Cohen's κ can also be used when the same rater evaluates the same patients at two time points (say 2 weeks apart) or, in the example above, grades the same answer sheets again after 2 weeks. Its limitations are: (i) it does not take into account the magnitude of differences, making it unsuitable for ordinal data; (ii) it cannot be used if there are more than two raters; and (iii) it does not differentiate between agreement for positive and negative findings, which may be important in clinical situations (e.g., wrongly diagnosing a disease versus wrongly excluding it may have different consequences).

Weighted kappa
For ordinal data, where there are more than two categories, it is useful to know if the ratings by different raters varied by a small degree or by a large amount. For example, microbiologists may rate bacterial growth on culture plates as: none, occasional, moderate, or confluent. Here, ratings of a particular plate by two reviewers as "occasional" and "moderate," respectively, would imply a lower level of discordance than if these ratings were "no growth" and "confluent," respectively. The weighted kappa statistic takes this difference into account. It thus yields a higher value when the raters' responses correspond more closely, with the maximum score for perfect agreement; conversely, a larger difference in two ratings provides a lower value of weighted kappa. Techniques for assigning weightage to the difference between categories (linear, quadratic) can vary.

Fleiss' kappa
This method is used when ratings by more than two observers are available for either binary or ordinal data.

ASSESSING AGREEMENT BETWEEN MEASUREMENTS OF CONTINUOUS VARIABLES
Two methods are available for assessing agreement between measurements of a continuous variable across observers, instruments, time points, etc. One of these, namely the intra-class correlation coefficient (ICC), provides a single measure of the extent of agreement; the other, namely the Bland-Altman plot, in addition provides a quantitative estimate of how closely the values from two measurements lie.

Intra-class correlation coefficient
Let us think of two ophthalmologists measuring intraocular pressure using a tonometer. Each patient will thus have two readings, one by each observer.
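For ordinal ratings, scikit-learn's cohen_kappa_score supports the linear and quadratic weighting schemes mentioned above; in the sketch below, the two raters' growth scores are invented purely for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Culture-plate growth rated 0=none, 1=occasional, 2=moderate, 3=confluent.
rater1 = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
rater2 = [0, 2, 2, 3, 1, 1, 0, 2, 2, 0]

# Unweighted kappa treats every disagreement as equally severe;
# the weighted variants give partial credit to near-miss ratings.
print(cohen_kappa_score(rater1, rater2))
print(cohen_kappa_score(rater1, rater2, weights="linear"))
print(cohen_kappa_score(rater1, rater2, weights="quadratic"))
```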
ICC provides an estimate of the overall concordance between these readings. It is somewhat akin to "analysis of variance" in that it looks at the between-pair variances expressed as a proportion of the total variance of the observations (i.e., the total variability in "2n" observations, which would be expected to be the sum of within- and between-pair variances). The ICC can take a value from 0 to 1, with 0 indicating no agreement and 1 indicating perfect agreement.

Bland-Altman plots
When two instruments or techniques are used to measure the same variable on a continuous scale, Bland-Altman plots can be used to estimate agreement. This plot is a scatter plot of the difference between the two measurements (Y-axis) against the average of the two measurements (X-axis). Thus, it provides a graphical display of bias (mean difference between the two observers or techniques) with 95% limits of agreement. The latter are given by the formula:

Limits of agreement = mean observed difference ± 1.96 × standard deviation of observed differences

Consider a situation where we wish to assess the agreement between hemoglobin measurements (in g/dL) using a bedside hemoglobinometer and the formal photometric laboratory technique in ten persons [Table 3]. The Bland-Altman plot for these data shows the difference between the two methods for each person [Figure 1]. The mean difference between the values is 1.07 g/dL (with a standard deviation of 0.36 g/dL), and the 95% limits of agreement are 0.35 to 1.79 g/dL. What this implies is that the hemoglobin level of a particular person measured by one method could be anywhere from as little as 0.35 g/dL to as much as 1.79 g/dL higher than that measured by the other method (this is the case for 95% of individuals; for 5% of individuals, variations could be outside these limits). This obviously means that the two techniques cannot be used as substitutes for one another. Importantly, there is no uniform criterion for what constitutes acceptable limits of agreement; this is a clinical decision and depends on the variable being measured.

Correlation versus agreement
As alluded to above, correlation is not synonymous with agreement. Correlation refers to the presence of a relationship between two different variables, whereas agreement looks at the concordance between two measurements of one variable. Two sets of observations which are highly correlated may have poor agreement; however, if the two sets of values agree, they will surely be highly correlated. For instance, in the hemoglobin example, even though the agreement is poor, the correlation coefficient between values from the two methods is high [Figure 2] (r = 0.98). The other way to look at it is that, though the individual dots are fairly close to the dotted line (the least squares line, indicating good correlation [2]), they are quite far from the solid black line, which represents the line of perfect agreement (Figure 2: the solid black line). In case of good agreement, the dots would be expected to fall on or near this (the solid black) line.

Use of paired tests to assess agreement
For all the three situations shown in Table 1, the use of McNemar's test (meant for comparing paired categorical data) would show no difference. However, this cannot be interpreted as evidence of agreement.
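A minimal numpy sketch of the Bland-Altman quantities; the two measurement series are invented stand-ins for the Table 3 data, chosen only so the script runs.

```python
import numpy as np

def bland_altman(m1, m2):
    """Return bias (mean difference) and 95% limits of agreement."""
    diff = np.asarray(m1, float) - np.asarray(m2, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical hemoglobin values (g/dL): bedside vs. photometric method.
bedside    = [11.2, 12.5, 9.8, 13.1, 10.4, 14.0, 12.2, 11.7, 10.9, 13.5]
photometry = [10.1, 11.3, 8.9, 12.0,  9.5, 12.8, 11.1, 10.6,  9.8, 12.4]

bias, lo, hi = bland_altman(bedside, photometry)
print(f"bias = {bias:.2f} g/dL, 95% limits: {lo:.2f} to {hi:.2f} g/dL")
```

The McNemar point can also be checked directly. Using statsmodels, Situations 1 and 3 of Table 1 give identical, nonsignificant McNemar results even though kappa differs drastically (0.6 versus −0.6); the tables below are exactly those of the text.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: examiner A (pass, fail); columns: examiner B (pass, fail).
situation1 = [[8, 2], [2, 8]]   # good agreement (kappa = 0.6)
situation3 = [[2, 8], [8, 2]]   # worse-than-chance agreement (kappa = -0.6)

for name, table in [("Situation 1", situation1), ("Situation 3", situation3)]:
    result = mcnemar(table, exact=True)
    # Both p-values equal 1.0: McNemar sees no difference in either case.
    print(name, "McNemar p =", result.pvalue)
```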
The McNemar's test compares overall proportions; therefore, any situation where the overall proportions of pass/fail by the two examiners are similar (e.g., situations 1, 2, and 3 in Table 1) would result in a lack of difference. Similarly, the paired t-test compares the mean difference between two observations in a group. It can therefore be nonsignificant if the average difference between the paired values is small, even though the differences between the two observers for individuals are large.

[Figure 1 caption: The upper and lower limits of agreement are generally drawn at 1.96 (roughly 2) standard deviations (of observed inter-observer differences) above and below the line representing the mean difference (solid line); these dotted lines are expected to enclose 95% of the observed inter-observer differences. Data from Table 3.]

SUGGESTED READING
The readers are referred to the following papers that feature measures of agreement:
1. Qureshi et al. compared the grade of prostatic adenocarcinoma as assessed by seven pathologists using a standard system (Gleason's score) [3]. Concordance between each pathologist and the original report and between pairs of pathologists was determined using Cohen's kappa. It is a useful example. However, we feel that, Gleason's score being an ordinal variable, weighted kappa might have been a more appropriate choice.
2. Carlsson et al. looked at inter- and intra-observer variability in the Hand Eczema Extent Score in patients with hand eczema [4]. Inter- and intra-observer reliability was assessed using the ICC.
3. Kalantri et al. looked at the accuracy and reliability of pallor as a tool for detecting anemia [5]. They concluded that "Clinical assessment of pallor can rule out and modestly rule in severe anemia." However, the inter-observer agreement for detecting pallor was very poor (kappa values = 0.07 for conjunctival pallor and 0.20 for tongue pallor), which means that pallor is an unreliable sign for diagnosing anemia.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
2018-04-03T02:01:44.220Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "3de754ac9aa322049b0fdfee2e1deb008cebea9c", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/picr.picr_123_17", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c71efe6885ca685cf58851fd3416436671c2cf10", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Medicine" ] }
246017113
pes2o/s2orc
v3-fos-license
The Independence of Distinguishability and the Dimension of the System

There are substantial studies on distinguishabilities, especially local distinguishability, of quantum states. It has been shown that a necessary condition for a locally distinguishable state set is that the total Schmidt rank is not larger than the system dimension. However, if we view the states in a larger system, the restriction becomes invalid. Hence, a natural problem is whether indistinguishable states can become distinguishable by viewing them in a larger system without employing extra resources. In this paper, we consider this problem for (perfect or unambiguous) LOCC$_{1}$, PPT and SEP distinguishabilities. We demonstrate that if a set of states is indistinguishable in $\otimes_{k=1}^{K} C^{d_k}$, then it is indistinguishable even when viewed in $\otimes_{k=1}^{K} C^{d_k+h_k}$, where $K, d_k \geqslant 2, h_k \geqslant 0$ are integers. This shows that such distinguishabilities are properties of the states themselves and independent of the dimension of the quantum system. Our result gives the maximal numbers of LOCC$_{1}$ distinguishable states and can be employed to construct a LOCC indistinguishable product basis in general systems. Our result is suitable for general states in general systems. For further discussion, we define the local-global indistinguishable property and present a conjecture.

Introduction
In quantum information theory, the distinguishability of states is of central importance. If general POVMs are allowed, states can be distinguished if and only if they are orthogonal [1]. However, in realistic tasks, multipartite states are often shared by separated owners, who cannot employ general POVMs. Fortunately, technologies of classical communication are well developed and can be employed easily. In this spirit, distinguishing states by local operations and classical communication (LOCC) becomes available and significant. On the other hand, since distinguishability via LOCC POVMs implies distinguishability via SEP POVMs, which in turn implies distinguishability via PPT POVMs, while PPT and SEP POVMs have simpler properties than LOCC ones, they are also worth considering. There has been substantial research on distinguishabilities, especially LOCC distinguishability. It has been shown that two orthogonal pure states are LOCC distinguishable [1]. An innocent intuition might be that the more entanglement a set has, the harder it is to distinguish by LOCC, which, however, is not true in general. Although entanglement indeed gives bounds on LOCC distinguishability [3,4], a LOCC indistinguishable set of nine orthogonal product states in C^3 ⊗ C^3 exists [2], which of course has no entanglement. The local distinguishability of maximally entangled states might be the most-researched case. In C^3 ⊗ C^3, three orthogonal maximally entangled states are LOCC distinguishable [5], while there exist three LOCC_1 indistinguishable orthogonal maximally entangled states in C^d ⊗ C^d, for d ≥ 4 even or d = 3k + 2 [6]. The result was weakened but extended to all d ≥ 4, namely there are 4 LOCC_1 indistinguishable orthogonal maximally entangled states in C^d ⊗ C^d when d ≥ 4 [7]. More results were given for Bell states and generalized Bell states. In C^2 ⊗ C^2, three Bell states are LOCC indistinguishable [8], while in C^d ⊗ C^d with d ≥ 3, three generalized Bell states are LOCC distinguishable [9]. A result shows that if d is a prime and l(l − 1) ≤ 2d, then l generalized Bell states are LOCC distinguishable [10].
Note that d + 1 maximally entangled states in C^d ⊗ C^d are LOCC indistinguishable [4]. Strangely, any set of generalized Bell states becomes LOCC distinguishable when two copies are provided [11]. On the other hand, the LOCC distinguishability of orthogonal product states is also interesting. It has been shown that an unextendible product basis is LOCC indistinguishable [12], while the LOCC distinguishability of a completed product basis is equivalent to its LPCC distinguishability [19]. Constructing LOCC indistinguishable product states has also been studied [13,14,15,16,17]. We mention that in C^3 ⊗ C^2, there are four LOCC_1 indistinguishable orthogonal product states when Alice goes first, while in C^3 ⊗ C^3, there are five LOCC_1 indistinguishable orthogonal product states no matter who goes first [18]. Other methods include [20,21,22,23]. Instead of considering states in a fixed system, as most previous research did, we consider the natural problem of whether a set of indistinguishable states can become distinguishable by viewing the states in a larger system without employing other resources. It is worth considering for at least three reasons. Firstly, the local distinguishability of states is bounded by the dimension of the system. In a bipartite system, a necessary condition for a locally distinguishable set is that the total Schmidt rank is not larger than the system dimension [4], which, however, can be removed by viewing the states in a larger system. Hence, whether the states remain indistinguishable is still in question. Secondly, by employing extra resources such as entanglement, a locally indistinguishable set may become distinguishable [24,25,26,27,28], while a universal resource might only exist in a larger system [27]. This gives a feeling that the local indistinguishability of states might depend on the system. Finally, the distinguishability of points in a Hilbert space could be described by distances: for example, set the distance between two different points to be 1, and otherwise 0. However, the distance between points may depend on the chosen space. For instance, the usual distance between two diagonal points of a unit square is 2 in the 1-dimensional space consisting of the edges of the square, while it is √2 in the 2-dimensional space consisting of the plane of the square. Hence, it is worth suspecting that the distinguishability of states may depend on the chosen space. In this paper, we demonstrate that LOCC_1, unambiguous LOCC, PPT and SEP distinguishabilities are properties of the states themselves, namely independent of the system dimension, by proving that an indistinguishable set in ⊗_{k=1}^{K} C^{d_k} remains indistinguishable via POVMs of the same kind even when viewed in ⊗_{k=1}^{K} C^{d_k+h_k}. Our result solves the problem of searching for the maximal number of locally distinguishable states once and for all, and provides a LOCC indistinguishable product basis in general systems. Note that the result is suitable for general states in general systems and for both perfect and unambiguous discriminations.

Setting
In mathematics, a quantum system shared by K owners, namely parties A^(s), s = 1, 2, ..., K, can be described as a Hilbert space ⊗_{k=1}^{K} C^{d_k}, while a general state can be described as a density operator (positive semi-definite with unit trace). That a state set S is LOCC (LOCC_1, PPT, SEP) distinguishable means that there is a LOCC (LOCC_1, PPT, SEP) POVM {M_j}_{j=1,2,...,J} such that for each j, Tr(M_j ρ) ≠ 0 for at most one state ρ in S. A LOCC_r POVM is described as follows.
Each party A^(s), s = 1, 2, ..., K, performs a local measurement on its own subsystem, which may depend on the previously published results, and publishes its outcome, in order. A round means that every party measures and publishes a result once. A POVM {M_j}_{j=1,2,...,J} generated by such a procedure after r rounds is defined to be a LOCC_r POVM. For example, a LOCC_1 POVM arises as follows: party A^(s) measures its subsystem with a local POVM chosen according to the outcomes j_1, j_2, ..., j_{s−1} published by the parties before it, and obtains an outcome j_s ∈ {1, 2, ..., J_{j_1,j_2,...,j_{s−1}}}, where J_{j_1,...,j_{s−1}} = J for s = 1. On the other hand, {M_j}_{j=1,2,...,J} is said to be a PPT POVM if every M_j is positive semi-definite after a partial transposition, while it is said to be a SEP POVM if every M_j can be written as a tensor product of local operators.

There is a natural embedding from C^d to C^{d+h}, viewing a state in C^d as a state in C^{d+h}. Precisely, a computational basis of ⊗_{k=1}^{K} C^{d_k} can be extended to a computational basis of ⊗_{k=1}^{K} C^{d_k+h_k}, by which all operators are written in matrix form. A density matrix (state) ρ in ⊗_{k=1}^{K} C^{d_k} can then be viewed as the density matrix (state)
$$\tilde{\rho} = \begin{pmatrix} \rho & 0 \\ 0 & 0 \end{pmatrix}$$
in ⊗_{k=1}^{K} C^{d_k+h_k}. These views will be employed in the rest of the paper.

The following corollaries, providing maximal numbers of distinguishable states, generalize results in special systems to general systems and thus illustrate the reach of the theorem. For general pure states, since there exist three LOCC_1 indistinguishable orthogonal states in C^2 ⊗ C^2, for example three orthogonal Bell states, Theorem 1 together with the result in [1] implies:

Corollary 1 In any non-trivial system, the maximal number T such that any T orthogonal pure states are LOCC_1 distinguishable is 2.

For orthogonal product states, four LOCC_1 indistinguishable states were constructed in C^3 ⊗ C^2 under a fixed measurement order, while five LOCC_1 indistinguishable states were constructed in C^3 ⊗ C^3 allowing a chosen measurement order. Results in bipartite systems also show that three orthogonal product states are LOCC_1 distinguishable in any order, while four orthogonal product states are LOCC_1 distinguishable in a suitable order [18]. Therefore, as a consequence of Theorem 1, we have:

Corollary 2 In C^m ⊗ C^n, the maximal number P such that any P orthogonal product states are LOCC_1 distinguishable in a fixed measurement order is 3, where m ≥ 3, n ≥ 2, while the maximal number Q such that any Q orthogonal product states are LOCC_1 distinguishable in a suitable order is 4, where m, n ≥ 3.

For orthogonal product bases, a LOCC indistinguishable basis in C^3 ⊗ C^3 was constructed [2]. Assisted by the result in [19] and using Theorem 1, a LOCC indistinguishable orthogonal product basis in C^m ⊗ C^n can be constructed, where m, n ≥ 3. The above basis is LOCC indistinguishable, not only LOCC_1 indistinguishable. Details are provided as follows. We need a lemma which is an easy corollary of the result in [19].

Lemma 1 An orthogonal product basis in a multipartite system is LOCC distinguishable if and only if it is LOCC_1 distinguishable.

By Theorem 1, the above orthogonal basis is LOCC_1 indistinguishable, since the Domino states are LOCC (and thus LOCC_1) indistinguishable in C^3 ⊗ C^3. As we are considering an orthogonal product basis, the above lemma shows that it is LOCC indistinguishable. The construction can be generalized to multipartite systems by taking the tensor product of the above basis, after normalizing, with a normalized orthogonal basis of the other parties.
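As a numerical illustration of the embedding (not part of the paper's proof), the numpy sketch below pads a state of C^2 ⊗ C^2 into C^3 ⊗ C^3 via the natural isometry and checks that an operator of the larger space contributes to the outcome statistics only through its up-left block, i.e., Tr(M ρ̃) = Tr(M_1 ρ); the state and operator are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Isometry embedding C^2 into C^3 (the first two basis vectors are kept).
V_local = np.eye(3, 2)
V = np.kron(V_local, V_local)      # embeds C^2 (x) C^2 into C^3 (x) C^3

# A random pure state rho on C^2 (x) C^2 (placeholder, not from the paper).
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

rho_big = V @ rho @ V.conj().T     # the embedded state, padded with zeros

# A random Hermitian "measurement" operator on the larger space.
A = rng.normal(size=(9, 9)) + 1j * rng.normal(size=(9, 9))
M = (A + A.conj().T) / 2
M_upleft = V.conj().T @ M @ V      # its up-left block w.r.t. the embedding

# Key identity used in the proof: the statistics only see the block.
assert np.isclose(np.trace(M @ rho_big), np.trace(M_upleft @ rho))
print("Tr(M rho~) = Tr(M_1 rho) =", np.trace(M @ rho_big).real)
```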
Proof of the result
We only prove the theorem for perfect discriminations; the proofs for unambiguous discriminations are similar. We will show that the up-left block of a POVM in ⊗_{k=1}^{K} C^{d_k+h_k} is a POVM in ⊗_{k=1}^{K} C^{d_k} of the same kind for LOCC_1, PPT, SEP or global POVMs, while the condition of distinguishability in the lower-dimensional system is the same as in the larger-dimensional system, since the lower-right block of the embedded state has trace 0.

Lemma 2 Let {M_j}_{j=1,2,...,J} be a (LOCC_1, PPT, SEP, global) POVM of ⊗_{k=1}^{K} C^{d_k+h_k}. Then the up-left blocks {M_{j1}}_{j=1,2,...,J} form a (LOCC_1, PPT, SEP, global) POVM of ⊗_{k=1}^{K} C^{d_k}.

Proof: For a POVM {M_j}_{j=1,2,...,J} in ⊗_{k=1}^{K} C^{d_k+h_k}, M_j can be written in block form
$$M_j = \begin{pmatrix} M_{j1} & M_{j2} \\ M_{j3} & M_{j4} \end{pmatrix},$$
where M_{j1} acts on ⊗_{k=1}^{K} C^{d_k}. That {M_j}_{j=1,2,...,J} is a PPT POVM means that for every j, M_j is positive semi-definite after a partial transposition. Without loss of generality, for a given j, assume that the partial transposition is on A^(1). Then the up-left block of the partially transposed M_j is again positive semi-definite, which implies that M_{j1} is a PPT operator. That {M_j}_{j=1,2,...,J} is a SEP POVM means that M_j can be written as a tensor product of local operators; restricting each local factor to the original local subspace preserves this form, so M_{j1} is again of the same kind.

Discussion
The result may also hold for other indistinguishabilities. However, the method in this paper may not work for them. For example, the up-left block of a LOCC (LPCC) POVM in ⊗_{k=1}^{K} C^{d_k+h_k} may not be a LOCC (LPCC) POVM in ⊗_{k=1}^{K} C^{d_k}. The reason may relate to the fact that, when measuring a state in a larger-dimensional system, the collapsed state may not lie in the original lower-dimensional system. We constructed a projective POVM in C^4 ⊗ C^4 which is not projective when looking only at the up-left 3 × 3 blocks. However, it will not be surprising if the result can be extended to other cases. Note that for a completed or unextendible product basis, the result holds for LOCC distinguishability [19,36]. Hence, we have the following definitions and conjecture. This gives a framework of local-global indistinguishability, which states the independence of state distinguishability and the dimension of the system. Now, Theorem 1 can be restated as: for M = LOCC_1 (unambiguous LOCC, (perfect or unambiguous) PPT, SEP, global) distinguishability, the conjecture is true.

Conclusion
In this paper, we consider the natural problem of whether the indistinguishability of states depends on the dimension of the system. We demonstrate that LOCC_1, PPT and SEP indistinguishabilities, both perfect and unambiguous, are properties of the states themselves and independent of the choice of dimension. More exactly, we show that if states are LOCC_1 (or unambiguous LOCC, (perfect or unambiguous) PPT, SEP) indistinguishable in a lower-dimensional system, then they are LOCC_1 (or unambiguous LOCC, (perfect or unambiguous) PPT, SEP) indistinguishable in a dimensionally extended system. The result is true for both bipartite and multipartite systems and for both pure and mixed states. Combined with previous results, Theorem 1 gives the maximal numbers of locally distinguishable states and can be employed to construct a LOCC indistinguishable orthogonal product basis in general systems, except for one or two small-dimensional ones. Note that the corollaries are even suitable for multipartite systems. For further discussion, we define the local-global indistinguishable property and present a conjecture. Both proving its validity and searching for counterexamples could be interesting.

Acknowledgments
We wish to acknowledge Professor Zhu-Jun Zheng, who gave advice and helped to check the paper.

Funding
No funding.

Conflicts of interest/Competing interests
The author declares there are no conflicts of interest.
2020-10-08T02:05:56.770Z
2020-10-07T00:00:00.000
{ "year": 2020, "sha1": "83f752f3893395e01b2e406c32a732514ac87677", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2010.03120", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1d25c7f4205281d9b47fe7449ebcf2f9b493cda9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
55532681
pes2o/s2orc
v3-fos-license
Dosimetric comparison between VMAT and RC3D techniques: case of prostate treatment

Considered the second most common men's cancer in Algeria, prostate cancer is treated by radiation in 70% of cases; radiation therapy is therefore a key therapeutic weapon against prostate cancer. Three-dimensional conformal radiotherapy (RC3D) is the most common technique [1−5]. Conventionally optimized treatment plans were compared, in a case scenario, with optimized VMAT treatment plans for prostate cancer. The evaluation of the two optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up. Dose-volume histograms of the Planning Target Volume and doses in the Organs At Risk were used to calculate the conformity index and the evaluation ratio of irradiated volume, which represent the main tools of comparison [6,7]. The situation was analysed systematically. The 14% dose increase in the target leads to a decrease of the dose in adjacent organs, by up to 39% in the bladder. Therefore, the criteria of better efficacy and less toxicity reveal that VMAT is the best choice.

Introduction
Cancer is a term used when abnormal cells divide without control and are able to invade other tissues. In this case, cells can spread to other parts of the body through the blood and lymphatic systems. It is a major topic worldwide. Considered to be responsible for a high percentage of deaths in Algeria, prostate cancer is the biggest killer among men aged between 60 and 80 years. The treatment of cancer is summarized in three main categories: drugs, surgery and radiation therapy. The choice of the treatment depends on the tumor type, location, stage of the disease and general state of the patient. Most of the time, the treatment is combined. Its ultimate goal is to eradicate the disease without damaging the surrounding healthy tissues. In practice, the medical decision is always a trade-off and compromise between Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) [3,5,8−10]. At present, radiation therapy (RT) is used in about 70% of all cancer treatments in industrialized countries. It allows killing tumor cells by depositing a high dose of radiation within the tumor. In North Africa, and specifically in Algeria, the first private anti-cancer center, "Athena Medical Center", treating with conventional techniques, was opened in 2013. Photon delivery techniques have evolved on conventional linear accelerators equipped with multi-leaf collimators, from conformal RC3D to VMAT [11−15]. The main task of a medical physicist is to make sure that the right dose is delivered in the right place. Treatment planning systems (TPS) are used to plan radiotherapy treatments, but with limited dose accuracy in some particular cases. On the other hand, the OAR must be spared and receive the minimum of dose. In prostate cancer, the better ballistics of VMAT over RC3D allow decreasing the dose to surrounding tissues while achieving 76 Gy in the PTV.
Material and method
Treatment planning (TP) is the function that translates the physician's prescription and intent into a treatment plan. It contains the set of equipment parameters that control the dose deposition in the patient, which is expected to be equivalent to the dose predicted by the treatment plan. This definition often implies many requirements above and beyond those implied for photon TP. For the optimization, the physician's prescription is a set of quantified statements (e.g., maximum dose to the target), suitable for numeric manipulation in an optimization algorithm, and generalized intent (e.g., minimize dose to the OAR). The quality of a particular objective is limited by the prescription, which fixes the boundaries of what remains possible in a trade-off consideration of competing objectives [11,16,17].

In this context, two patients with curative treatment were the subject of this retrospective study. They were selected for stage and volume. The treatment parameters for a chosen patient are listed in Table 1. The two patients underwent 3D CT simulation using immobilization devices (headrest, footrest, foot wedge; Medical Solutions) with a General Electric Optima Radiation Therapy CT scanner. Scans covered the entire pelvis volume and the upper part of the abdomen. CT imaging data were then imported into the treatment planning system Eclipse (version 11.3, Varian Medical Systems, California, USA). The gross tumor volume (GTV) was contoured by the treating radiation oncologist from the CT datasets. For each prostate cancer case, two treatment plans were generated, accounting for treatment uncertainties in two different ways. VMAT plans were developed using the Eclipse TPS with 6 and 18 MV for each patient. AAA (Analytical Anisotropic Algorithm, Varian Medical Systems, California, USA) was used to compute the dose distributions. Inverse treatment plans for VMAT were generated using the same dose-volume constraints for all plans. The dose constraints were set for the rectal wall, rectum, bladder, bowel, femoral heads and unspecified normal structures.
Dose-volume histograms (DVHs) were used to compare the treatment plans, including PTV and OAR, between the two treatment techniques. Dose distributions were then evaluated at particular points of the DVHs [12] using the conformity index (CI), defined as CI = V_IR / V_T (ideal value = 1), where V_IR denotes the tumoral volume covered by the reference isodose and V_T the tumoral volume. Dose differences between the VMAT and RC3D plans in the different OAR structures (rectal wall, rectum, bladder, bowel, and femoral heads) are illustrated in Table 1. This observational study shows the DVH curves of prostatic irradiation at 76 Gy for the VMAT and RC3D plans. The mean DVHs were obtained in the OARs mentioned above. The DVH curves obtained by VMAT lie underneath those obtained by RC3D. For the other structures, bladder and soft tissues, the curves cross each other several times. We can observe that the mean dose in the different OARs is significantly reduced in the VMAT plans. The values of this reduction reach their maximum for the bladder, with more than 40% reduction in the mean dose. On the other hand, we can see in Table 2 the difference in terms of CI between the VMAT and RC3D plans. The CI values for the RC3D plans are less than 0.8, whereas the VMAT plan values are above 0.9.

Results and discussion
Dose values (maximum and mean) for PTV 46 Gy, PTV 60 Gy, PTV 76 Gy and the OARs are presented, and we can observe that the mean dose in the rectum decreases by about 30% with the VMAT technique. The volumes covered by doses of 60 Gy (V60) and 70 Gy (V70) decrease by 20% and 14%, respectively. For the bladder, they decrease by 10% and 18%, respectively. For the femoral heads, VMAT decreases the dose in V30 from 20% to 9%. Table 2 shows that the VMAT conformity index values vary from 0.93 to 1.0 and are all close to the ideal value, unlike those of the RC3D technique.

To help the analysis, we performed some statistical tests and calculated the VMAT improvement ratio of conformation in the PTVs; the results are represented in Fig. 3. The PTV 46 Gy volume is improved by a factor of 50%, as this volume contains all the OARs. For PTV 60 Gy, the improvement is more than 30%; this volume is smaller than PTV 46 Gy and contains the rectum and bladder. For PTV 76 Gy, the improvement is more than 14%, and this volume contains the prostate only.

Fig. 1. DVH curves of the dose distribution for a patient treated for prostate cancer with 76 Gy: (right) VMAT plan, (left) RC3D plan. The PTV 76 Gy is represented in red, PTV 46 Gy in dark brown, the left femoral head in blue, the right femoral head in cyan, the bowel in green, the rectum in brown, the bladder in orange, and the prostate in pink.
Fig. 2. Graphical representation of the differences in mean and maximum dose between the VMAT and RC3D plans.
Fig. 3. Reduction of the 95% isodose volume in the VMAT plans compared to the RC3D plans.
Table 1. Difference in the delivered mean dose and maximum dose between the RC3D plan and the VMAT plan for PTVs and OARs.
Table 2. Comparison between conformity indexes of the VMAT and RC3D plans.
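The conformity-index arithmetic is a one-liner; in the Python sketch below, the volumes are invented placeholders merely illustrating why values near 1 (VMAT-like) beat values below 0.8 (RC3D-like). The CI definition follows the reconstruction given above, which is an interpretation of the garbled source formula.

```python
def conformity_index(v_ir, v_t):
    """CI = V_IR / V_T: tumoral volume covered by the reference
    isodose over the total tumoral volume (ideal value = 1)."""
    return v_ir / v_t

# Hypothetical tumoral volumes (cm^3) covered by the reference isodose.
print(conformity_index(96.0, 100.0))  # 0.96: VMAT-like, close to ideal
print(conformity_index(76.0, 100.0))  # 0.76: RC3D-like, below 0.8
```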
2018-12-08T02:17:27.692Z
2017-05-10T00:00:00.000
{ "year": 2017, "sha1": "99284d3b4270600815477b4e634f81d380dfa385", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2017/23/epjconf_tesnat2017_01013.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "99284d3b4270600815477b4e634f81d380dfa385", "s2fieldsofstudy": [ "Medicine", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
216654816
pes2o/s2orc
v3-fos-license
Screening of Aspergillus and Candida Species with Utmost Potential to Synthesize Citric Acid

Background: Citric acid production through fermentation is economical, but meeting its increasing global demand has been challenging in recent times.
Aim: This study aimed to screen Aspergillus and Candida spp. isolated from different sources with potential for producing citric acid.
Methodology: Aspergillus and Candida spp. were isolated from compost soil and fruits (cucumber and banana) and their morphological characteristics were described using standard microbiological methods. The isolates were quantitatively screened for citric acid production based on the appearance of a yellow zone of clearance over 3 days. All the isolates which had acid unitage (AU) values > 5.0 were selected for further characterization using molecular methods.
Results: Candida tropicalis, Aspergillus sp., A. niger and Penicillium sp. were isolated from the soil and fruit samples. The isolates screened for citric acid production displayed varying diameters of yellow zones around their colonies, which is indicative of the varying capability of the microbial strains. A. niger from compost soil, which had the highest AU value of 8.5 at Day 3, demonstrated the greatest potential to yield citric acid. Molecular characterization revealed the high citric acid producing strains as Aspergillus niger (EU440768.1) and Aspergillus welwitschiae (MG669181.1).
Conclusion: Although Aspergillus niger is widely utilized for industrial production of citric acid, this study has demonstrated that A. welwitschiae is a species of Aspergillus capable of synthesizing citric acid reasonably well.

INTRODUCTION
In recent times, the demand for citric acid has been on the increase. The yearly worldwide demand for citric acid is about 6,000,000 tons. In 2004, global production of citric acid was estimated to be 1.4 million tonnes, which later increased to 1.6 million tonnes in 2007 [1,2]. It is projected that the growth in annual global demand for citric acid is 3.5-4% [3]. Citric acid is a weak organic acid, solid at room temperature, with a melting point of 153°C and a molecular weight of 210.14 g/mol. This weak organic acid occurs naturally in all citrus fruits. Citric acid has three different pKa values: 3.1, 4.7 and 6.4. The global demand for citric acid is high due to its low toxicity when compared with other acidulants, and it has useful applications mainly in the pharmaceutical and food industries [2,4,5]. This weak organic acid has several applications. They include pharmaceutically active substances, pharmaceuticals, personal care and cosmetic products, food, flavouring agents, diuretics, blood anticoagulants, environmental remediation and beverages. Emerging uses of citric acid involve the manufacturing of household detergents, dishwashing cleaners, disinfectants, etc. [6]. Citric acid is Generally Recognized as Safe (GRAS). This status was a result of the approval given to the product by the Joint FAO/WHO Expert Committee on Food Additives [7]. In the past, citric acid was naturally obtained from orange, lemon and lime. However, the quantity of citric acid produced through this means is grossly insufficient to meet the global demand for the product. To achieve production of citric acid in commercial quantity, the use of microorganisms, which involves fermentation, is preferable to chemical methods. This is because the chemical method is not economically competitive compared with the fermentation method [2].
Some species of fungi such as Aspergillus sp., Acremonium sp., Botrytis sp., Eupenicillium sp., Mucor sp., Penicillium sp. and Trichoderma sp. can be utilized for citric acid production. Similarly, some species of yeast, mainly of the genus Candida, as well as bacteria such as Bacillus licheniformis, Arthrobacter paraffinens, Corynebacterium sp. and Bacillus subtilis, have also been used for citric acid production [8,9]. Among the microorganisms which have the capability to produce citric acid, Aspergillus niger and Candida tropicalis are preferable because they are capable of synthesizing considerable amounts of citric acid with minimal formation of undesirable toxic by-products [10]. A. niger can utilize a variety of cheap materials and give a high yield of citric acid due to its well-developed enzymatic system [11]. Almost all industrial processes use A. niger as the producing organism for citric acid production due to its simplicity of handling, rapid proliferation, tolerance to acidic pH and high yield [12]. This fungus is GRAS, except on rare occasions when humans develop hypersensitivity reactions following exposure to the spore dust. The United States Food and Drug Administration has considered many Aspergillus niger enzymes to be GRAS [10]. However, Candida tropicalis is less used in research and industry for citric acid production. According to Surkesh et al. [13], citric acid could be produced using A. niger isolated from decayed fruit and agronomic wastes such as grapes, orange, apple, vegetable, tapioca or coconut husk as substrate. Different strains of Aspergillus niger and Candida tropicalis could have varying abilities to produce citric acid. Therefore, this study seeks to screen the capability of Aspergillus and Candida spp. to produce citric acid, as well as to use molecular methods to identify the fungal and yeast isolates with great potential for synthesizing citric acid.

MATERIALS AND METHODS
Fresh fruits (cucumber and banana) were purchased from markets within the Port Harcourt metropolis. Soil samples around the Biology Green House as well as compost soil samples inside the campus were obtained using the method described by Pepper et al. [14].

Isolation and Subculture of Fungal Isolates
Serial dilution was performed using the soil samples and agricultural products (cucumber and banana) in accordance with the procedure described by Jalal et al. [15]. Using standard microbiological methods, potato dextrose agar (PDA) containing 10% lactic acid to suppress bacterial growth was used to isolate and subculture the fungal and yeast isolates.

Screening and Selection of Fungal Isolates for Citric Acid Production
The capabilities of the fungal isolates to produce citric acid were determined using the methods adopted by Patil and Patil [16]. After a 120 h incubation period at 28 ± 2°C, the culture plates, comprising Czapek-Dox agar medium supplemented with 0.5 g CaCO3 and bromocresol green as an indicator and inoculated with spores from each fungal isolate, were quantitatively screened for citric acid production over 3 days. Positive isolates were identified based on the presence of yellow zones around the colonies. The ability of the isolated fungi and yeast to produce citric acid was determined by measuring the zone of clearance of each isolate. The isolates which showed wide zones of clearance were selected because they were assumed to be the highest producers of citric acid.

Acid Unitage (AU) Test
The method used by Shaikh and Qureshi [17] was adopted.
In determining the acid unitage (AU) of each isolate, the diameter of the yellow zone was divided by the diameter of the colony. The isolates which recorded high AU values were selected for molecular characterization. All the isolates were transferred aseptically onto potato dextrose agar (PDA) slants maintained at 4°C throughout the duration of the study.

Molecular Identification of Screened Isolates
DNA extraction, PCR amplification of the fungal 18S rRNA gene and gel electrophoresis of the screened isolates were carried out at the Biotechnology Research Centre, University of Port Harcourt. The PCR products were sent to the International Institute of Tropical Agriculture (IITA), Ibadan, for sequencing of the 18S rRNA.

DNA extraction of screened fungal isolates
DNA extraction was carried out using a ZR Fungal/Bacterial DNA MiniPrep extraction kit supplied by the Biotechnology Research Centre, University of Port Harcourt. A heavy growth of the pure culture of each isolate was suspended in 200 µL of isotonic buffer, and a sufficient quantity of fungal inoculum was placed into ZR BashingBead™ lysis tubes, to which 750 µL of lysis solution was added. The tubes were secured in a bead beater fitted with a 2 mL tube holder assembly (Disruptor Genie™) and processed at maximum speed for 5 min. The ZR BashingBead™ lysis tubes were centrifuged in a micro-centrifuge at 10,000 rpm for 1 min. Four hundred microliters (400 µL) of supernatant was transferred to a Zymo-Spin™ IV Spin Filter (orange top) in a collection tube and centrifuged at 7,000 rpm for 1 min. One thousand two hundred microliters (1,200 µL) of fungal DNA binding buffer was added to the filtrate in the collection tube, bringing the final volume to 1,600 µL. Eight hundred microliters (800 µL) was then transferred to a Zymo-Spin™ IIC column in a collection tube and centrifuged at 10,000 rpm for 1 min. The flow-through was discarded from the collection tube, the remaining volume was transferred to the same Zymo-Spin™ IIC column, and the column was centrifuged at 10,000 rpm for 1 min. Two hundred microliters (200 µL) of DNA pre-wash buffer was added to the Zymo-Spin™ IIC column in a new collection tube and spun at 10,000 rpm for 1 min, followed by the addition of 500 µL of fungal DNA wash buffer and centrifugation at 10,000 rpm for 1 min. The Zymo-Spin™ IIC column was transferred to a clean 1.5 mL centrifuge tube (Eppendorf tube), 100 µL of DNA elution buffer was added directly to the column matrix, and the tube was centrifuged at 10,000 rpm for 30 sec to elute the DNA. The extracted DNA was then stored at −20°C for other downstream reactions. The concentration and purity of the extracted genomic DNA of the fungal isolates were estimated using a NanoDrop 1000 spectrophotometer. The absorbance was taken at 260 nm and 280 nm for each sample, and the ratio of the absorbances at 260 nm and 280 nm was used to assess the purity of the DNA. A ratio of ~1.8 is generally accepted as "pure" for DNA, while a ratio of ~2.0 is generally accepted as "pure" for RNA.

PCR amplification of the fungal 18S rRNA gene
The 18S rRNA genes of the isolates were amplified using the primer set ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) and ITS5 (5′-GGAAGTAAAAGTCGTAACAAGG-3′). The reaction was carried out in a 25 µL volume containing 6.6 µL of the cocktail mix (Zymo Master Mix) and 1 µL each of the forward and reverse primers, mixed with 3 µL of DNA template and 13.4 µL of sterile nuclease-free water.
The sequencing machine used was a 3130XL genetic analyzer from Applied Biosystems, and the PCR thermal cycler was a GeneAmp PCR System 9700. The PCR cycling parameters were: initial denaturation at 94°C for 5 min, followed by 36 cycles of denaturation at 94°C for 30 sec, annealing at 54°C for 30 sec and elongation at 72°C for 45 sec, followed by a final elongation step at 72°C for 7 min and a hold temperature of 10°C. Amplified fragments were visualized on a SafeView-stained 1% agarose electrophoresis gel.

Agarose gel electrophoresis

After the PCR reaction, five microliters (5 µL) of the amplified products were separated on a 1% agarose gel. A 600 base pair (600 bp) DNA ladder was used as the DNA molecular weight marker. Electrophoresis was run at 120 V for 20 min, and the gel was visualized using a UV transilluminator to determine the size of the DNA of the isolates.

Sequencing of amplified fungal 18S rRNA

The Sanger method on a 3130XL genetic analyzer from Applied Biosystems was used to sequence the amplified 18S products. The sequences generated by the sequencer were visualized using bioinformatic tools such as Chromas Lite for base calling. BioEdit was used for sequence editing before performing a Basic Local Alignment Search Tool (BLAST) search against the NCBI (National Center for Biotechnology Information) database (https://blast.ncbi.nlm.nih.gov/Blast.cgi). Similar sequences were downloaded and aligned with Clustal X, and the phylogenetic tree was drawn with MEGA 6 software.

Construction of phylogenetic tree

The evolutionary history was inferred using the Neighbor-Joining method [18]. The bootstrap consensus tree inferred from 500 replicates was taken to represent the evolutionary history of the taxa analyzed [19]. The trees were drawn to scale, and branches corresponding to partitions reproduced in less than 50% of bootstrap replicates were collapsed. The evolutionary distances were computed using the Jukes-Cantor method, in units of the number of base substitutions per site. All positions containing gaps and missing data were eliminated. Evolutionary analyses were conducted in MEGA6 [20].

Statistical Analysis

Two readings were taken and the average calculated for the acid unitage (AU) value of each isolate.

RESULTS AND DISCUSSION

Table 1 shows the morphological characterization of the fungal and yeast isolates. The isolates coded FB1, FC1, FC3, FC4, FC5, SB4, SC1, SC2 and SC3 were identified as Aspergillus niger. Isolates SB1, SB2 and SB3 were identified as Candida tropicalis, while isolates FB2 and FC2 were identified as Aspergillus sp. and Penicillium sp., respectively. Isolates FB1 and FB2 were obtained from fresh banana fruit; isolates FC1, FC2, FC3, FC4 and FC5 from fresh cucumber fruit; isolates SB1, SB2, SB3 and SB4 from soil around the Biology Green House; and isolates SC1, SC2 and SC3 from compost soil. All the isolates obtained were screened for citric acid production. Our results show that most of the isolates screened were Aspergillus niger obtained from fresh cucumber. Several studies have demonstrated that Aspergillus niger is chiefly used as the industrial species for the production of citric acid, and Candida tropicalis has also been implicated in the production of citric acid [9,21,22]. The ability of Aspergillus niger, Aspergillus sp., Penicillium sp. and Candida tropicalis to produce citric acid was evidenced by the yellow zones of clearance observed during screening of the isolates for citric acid production.
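To make the screening metric concrete, the acid unitage defined in the Methods (yellow-zone diameter divided by colony diameter) and the producer categories discussed below (following Lingappa et al. as cited in the text: AU ≥ 5 good, 3-5 moderate, < 3 poor) can be expressed as a short calculation. The sketch below is illustrative only; the plate diameters are hypothetical values chosen to reproduce the AU figures of 8.5 and 2.5 reported below, not measurements from the study.

```python
# Illustrative acid unitage (AU) calculation and strain classification.
# AU = diameter of yellow zone / diameter of colony (both in mm);
# category thresholds follow Lingappa et al. as cited in the text.
def acid_unitage(zone_diameter_mm: float, colony_diameter_mm: float) -> float:
    return zone_diameter_mm / colony_diameter_mm

def classify(au: float) -> str:
    if au >= 5.0:
        return "good citric acid producer"
    if au >= 3.0:
        return "moderate citric acid producer"
    return "poor citric acid producer"

# Hypothetical plate readings (not data from this study).
for isolate, zone, colony in [("SC3", 34.0, 4.0), ("FC2", 10.0, 4.0)]:
    au = acid_unitage(zone, colony)
    print(f"{isolate}: AU = {au:.1f} -> {classify(au)}")
```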
Our results show that isolates belonging to the same species had different diameters of clearance, colour change and progression in acid unitage (AU). The differences observed could be attributed to different fungal and yeast strains belonging to the same species [21]. Shown in Table 2 are the AU values of the fourteen (14) isolates obtained from various sources with the potential of producing citric acid. Isolates SC1, SC2 and SC3, identified as Aspergillus niger from compost soil, isolate FB2, identified as Aspergillus sp. from banana fruit, and SB3, identified as Candida tropicalis from soil around the Biology Green House, had acid unitage values of zero at Day 1. At Day 2, all the isolates had AU values above zero. Isolate SC3, identified as Aspergillus niger from compost soil, had the highest AU value of 8.5 at Day 3. On the contrary, isolate FC2, which is Penicillium sp. from cucumber, had the lowest AU value of 2.5. This result is an indication that Penicillium sp. is not preferable for citric acid production. In a related study, Abonama et al. [23] optimized citric acid production using Candida tropicalis and achieved a maximum citric acid concentration of 30.0 g/L under their most effective conditions. The AU values obtained in this study are consistent with the results reported by Lingappa et al. [24], who stated that an AU value of ≥ 5.0 should be demonstrated by any good citric acid producing strain. Furthermore, microbial strains with an AU of 3-5 were regarded as moderate, while those below 3 were regarded as poor citric acid producers. Their results show that the AU of Aspergillus niger isolates from fruit waste, irrigated soil, municipal solid waste and onions fell within the ranges 4.5-6.6, 2.0-4.0, 2.0-5.6 and 3.0-5.0, respectively. With reference to Lingappa et al. [24], we can categorize isolates SC1, SC3 and FC4 as high citric acid producing strains at Day 3, whereas isolates FB1, FB2, FC1, FC3, FC5, SB1, SB2, SB4 and SC2 are regarded as moderate citric acid producing strains at Day 3. Only isolate FC2, which is Penicillium sp., is considered a poor citric acid producing strain at Day 3. Our result is in agreement with the related study carried out by Lingappa et al. [24], which reported the highest AU values, within the range 4.5-6.6, for Aspergillus niger isolated from fruit waste, compared with the AU values of isolates obtained from other sources. Agarose gel electrophoresis of the amplified 18S rRNA gene of the fungal isolates is shown in Fig. 1. Shown in Fig. 2 is the evolutionary relationship of the fungal isolates and their closest GenBank relatives screened for production of citric acid, while Table 3 shows the sequence identification from the NCBI BLAST hits and their percentage relatedness. The evolutionary distances computed in this study are in agreement with the phylogenetic placement of the 18S rRNA of isolate ITS4 21 with Aspergillus niger, while isolates ITS4 22 and ITS4 23 were found to be closely related to Aspergillus welwitschiae and Candida tropicalis, respectively.

Table 1. Morphological characterization of fungal and yeast isolates (isolate code; colonial morphology; microscopic morphology; identity).

FB1. Colonial: growth rate rapid; colonies powdery in texture, producing radial tissues in the agar; surface colour initially white, becoming deep brown with conidial production; reverse pale yellow. Microscopic: septate hyphae with globose, radiate conidial heads with metulae supporting the phialides; conidiophores hyaline and smooth-walled. Identity: Aspergillus niger.
FB2. Colonial: surface colony colour a light green lawn surrounded by white lawn-like growth without radial symmetry. Microscopic: septate hyphae with septate conidiophores bearing conidia. Identity: Aspergillus sp.

FC1. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.

FC2. Colonial: green powdery surface surrounded by white lawn; reverse brown. Microscopic: septate hyphae with septate conidiophores bearing conidia. Identity: Penicillium sp.

FC3. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.

FC4. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.

FC5. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.

SB1. Colonial: colonies powdery in texture with brown spores and a white flabby edge; reverse pale yellow. Microscopic: pseudo-hyphae with budding blastoconidia. Identity: Candida tropicalis.

SB2. Colonial and microscopic morphology identical to SB1. Identity: Candida tropicalis.

SB3. Colonial and microscopic morphology identical to SB1. Identity: Candida tropicalis.

SB4. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.

SC1. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.

SC2. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.
SC3. Colonial and microscopic morphology identical to FB1. Identity: Aspergillus niger.

Our results presented in Table 3 confirm that the percentage similarity of Aspergillus niger (EU440768.1), Aspergillus welwitschiae (MG669181.1) and Candida tropicalis (KT356204.1) to other species was 97.74%, 99.30% and 83.11%, respectively. One interesting finding from this study is the indication that Aspergillus welwitschiae has the potential to synthesize citric acid, in addition to A. niger, which has been widely used for the same purpose. This result is in agreement with Almousa et al. [5], who used molecular methods to characterize Aspergillus niger MH368137, which demonstrated the ability to produce citric acid in large quantities. Their phylogenetic tree showed Aspergillus welwitschiae (KT826632.1), Aspergillus welwitschiae (KT826638.1), Aspergillus welwitschiae (KT826640.1), Aspergillus welwitschiae (MH035989.1) and other strains of Aspergillus niger and Aspergillus awamori as potential strains capable of producing citric acid in large quantities.

CONCLUSION

Among the various sources from which fungal and yeast isolates were obtained and screened for citric acid production, Aspergillus niger isolated from compost soil demonstrated the greatest capability to produce citric acid, while Penicillium sp. obtained from cucumber showed the poorest ability to do the same.
How Noisy is Lexical Decision?

Lexical decision is one of the most frequently used tasks in word recognition research. Theoretical conclusions are typically derived from a linear model on the reaction times (RTs) of correct word trials only (e.g., linear regression and ANOVA). Although these models estimate random measurement error for RTs, considering only correct trials implicitly assumes that word/non-word categorizations are without noise: words receive a yes-response because they have been recognized, and they receive a no-response when they are not known. Hence, when participants are presented with the same stimuli on two separate occasions, they are expected to give the same response. We demonstrate that this is not true and that responses in a lexical decision task suffer from inconsistency in participants' response choice, meaning that RTs of "correct" word responses include RTs of trials on which participants did not recognize the stimulus. We obtained estimates of this internal noise using established methods from sensory psychophysics (Burgess and Colborne, 1988). The results show similar noise values as in typical psychophysical signal detection experiments when sensitivity and response bias are taken into account (Neri, 2010). These estimates imply that, with an optimal choice model, only 83-91% of the response choices can be explained (i.e., can be used to derive theoretical conclusions). For word responses, word frequencies below 10 per million yield alarmingly low percentages of consistent responses (near 50%). The same analysis can be applied to RTs, yielding noise estimates about three times higher. Correspondingly, the estimated amount of consistent trial-level variance in RTs is only 8%. These figures are especially relevant given the recent popularity of trial-level lexical decision models using the linear mixed-effects approach (e.g., Baayen et al., 2008).

INTRODUCTION

Word recognition research often makes use of the lexical decision task (LDT). In this task participants are presented with strings of letters and have to decide whether the letters form an existing word (e.g., BRAIN) or not (BRANK). The main dependent variable is the decision time of the correct yes-responses to the word trials. A secondary variable is the decision accuracy. Originally it was thought that lexical decision performance was a pure measure of lexical access (i.e., the time needed to activate individual word representations in the mental lexicon; see Balota and Chumbley, 1984, for references to this literature). Later it became accepted that lexical decision times are also affected by the similarity of the presented word to the other words of the language (i.e., the total activation in the mental lexicon, usually defined via the number of words that can be formed by replacing a single letter of the original word; Grainger and Jacobs, 1996) and by the degree of similarity between the word and non-word stimuli (Gibbs and Van Orden, 1998; Keuleers and Brysbaert, 2011). The primacy given to reaction times (RTs) over decision accuracies reflects the fact that language researchers are primarily interested in the speed of word recognition rather than the precision of the process (given that in normal reading next to all words are recognized). In the vast majority of studies, RTs of correct word responses are modeled as a linear combination of a few fixed predictor variables and random measurement error (a minimal sketch of this standard analysis is given below).
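The following Python sketch illustrates the standard analysis this passage refers to: a linear model on correct word RTs with a fixed predictor plus residual measurement error. The predictor (log frequency), the coefficient values, and the data are simulated placeholders, not results from any study.

```python
# Minimal sketch of the standard linear analysis of correct word RTs:
# RT = intercept + fixed effect of a predictor + random measurement error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
log_freq = rng.uniform(0, 4, 500)                    # hypothetical predictor values
rt = 650 - 25 * log_freq + rng.normal(0, 60, 500)    # fixed effect + residual noise (ms)

model = sm.OLS(rt, sm.add_constant(log_freq)).fit()
print(model.params)     # estimated intercept and frequency slope
print(model.mse_resid)  # estimated residual (measurement-error) variance
```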
The estimate of the measurement-error component represents the expected RT fluctuation with respect to repeated sampling (i.e., to what degree RTs can be expected to vary in a replication of the experiment). However, when one estimates fixed and random effects for RTs in this way, it is assumed that the response level is fixed and thus will not vary across different replications of the same experiment. Participants respond "yes" because they have recognized the word, and they respond "no" to those words they do not know. In other words, a correct response is assumed to be fully reliable with respect to repeated sampling. To ensure valid RTs, participants and word stimuli are selected in such a way that overall performance accuracy is higher than 80-90% (that is, the words are selected so that they are known to most of the participants). Thus, statistical models of lexical decision experiments typically take measurement error into account with respect to decision times, but they assume this error to be zero for the actual decision itself. This notion, which is routinely adopted in lexical decision research, does not take into full account an established result in psychophysical research, namely that a large part of the variance in individual response choice reflects internal cognitive noise. Because of this noise, measurements of both response time and response choice vary to some extent when individuals respond to the same stimuli on repeated occasions. Psychophysicists investigate this source of noise by examining the probability distribution of responses to a particular stimulus rather than assuming that each response is a veridical, fixed estimate of stimulus processing difficulty. When fitting models to predict an individual's "correct" behavior, they then accept that the success of doing so depends on the amount of internal noise or internal consistency, which limits the amount of variance one can aim to explain. For a long time, psycholinguists have avoided the issue of internal noise by averaging data across a number of different experimental trials, which leads to analysis units (i.e., means) with smaller standard errors. But the issue is becoming increasingly relevant as more and more researchers are beginning to examine RT distributions instead of point estimates (e.g., Yap et al., 2012) and are using statistical analyses based on individual trials instead of aggregated ones (e.g., Baayen et al., 2008). One solution for lexical decision research could be to perform the data analysis with mathematical models that, for a given trial, predict both the RT and the response choice, including estimates of both RT-level and response-level measurement error. Unfortunately, such models (e.g., Ratcliff et al., 2004) are currently not as developed as the linear framework, meaning that they do not yet provide ready estimates for multiple fixed effects and multi-level random structures with reasonably scaled data sets (e.g., Pinheiro and Bates, 2000; Rigby and Stasinopoulos, 2005). Another reason why the linear framework is popular is that no one knows how large the internal noise is and, therefore, to what extent the assumption of fixed responses is unwarranted. Most researchers will acknowledge that assuming zero measurement error for response categories is most likely wrong, but a formal analysis of the degree of internal noise in a LDT is lacking.
To fill this gap, in the present manuscript we opt for a general approach borrowed from the psychophysical literature (Burgess and Colborne, 1988; Ahumada, 2002; Neri, 2010). In this line of research, participants are asked on each trial to discriminate a signal + noise stimulus (e.g., a target letter embedded in unstructured information) from a noise-alone stimulus (i.e., the unstructured information alone). In the first half of the experiment each trial presents new information; we refer to this part as the "first pass." In the second half, the stimuli of the first pass are repeated (albeit often in a different order) and participants have to respond to them again; we refer to this part as the "second pass." The inclusion of two passes with the same information allows researchers to compute two quantities: the percentage of trials on which the observer responded correctly (i.e., correctly identified the signal; defined as ρ), and the percentage of trials on which the observer gave the same response to a given trial on both the first and the second pass (defined as α). Burgess and Colborne (1988), Ahumada (2002), and Neri (2010) outlined how these two quantities can be used to estimate the amount of internal noise associated with the observers' stimulus processing. The model developed by Burgess and Colborne (1988), Ahumada (2002), and Neri (2010) represents a variant of standard signal detection theory (SDT, Green and Swets, 1966). In this model internal responses to external stimuli are assumed not only to reflect external noise (i.e., noise associated with the stimulus and having standard deviation σ_N), but also internal processing noise (with standard deviation σ_I). Specifically, internal responses to noise (r_N) and signal + noise stimuli (r_S+N) are modeled as follows:

r_N = ε_N + ε_I, with ε_N ~ N(μ_N, σ_N) and ε_I ~ N(0, σ_I)   (1)
r_S+N = r_N + S   (2)

As a result, the internal responses to external noise are assumed to be normally distributed with mean μ_N and a standard deviation determined by both σ_N (external noise) and σ_I (internal noise). On signal + noise trials a fixed internal value S is added. The contribution of external noise in Eqs 1 and 2 can be neutralized by normalizing both equations with respect to external noise (this is done by subtracting μ_N and dividing the outcome by σ_N, as for the calculation of z-scores). Writing z for the normalized internal responses, the outcome gives:

z_N = ε + γη   (3)
z_S+N = d_in + ε + γη   (4)

where ε and η are independent standard normal variables, γ = σ_I/σ_N, and d_in = S/σ_N. In these equations, the internal noise and the internal signal strength are expressed in units of external noise. The normalized internal signal strength d_in is called the signal detectability index or input sensitivity. Burgess and Colborne (1988) showed how the parameters in Eqs 3 and 4 can be derived from the values of ρ and α in a double-pass design. More specifically, they showed that good estimates for d_in and σ_I can be obtained through minimizing the mean-square error between the predicted and observed values of ρ and α (see also the sketch below). Neri (2010) observed that the internal noise across a wide range of perceptual tasks followed a lognormal distribution with γ = 1.35 ± 0.75 SD. The fact that the internal noise exceeded the external noise (i.e., γ > 1) was surprising, as it suggested that psychophysical choice is affected more by internal perturbations than by external variation. Indeed, in the first study, Burgess and Colborne (1988) expected less internal noise than external noise and obtained a ratio of γ = 0.75 ± 0.1 SD.
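To make the double-pass logic concrete, the Python sketch below simulates the unbiased version of this model and recovers d_in and γ from (ρ, α) by a coarse grid search. The observed values fed to the fit are illustrative placeholders, and the grid ranges are arbitrary choices; Burgess and Colborne fitted closed-form predictions rather than the Monte Carlo estimates used here.

```python
# Monte Carlo sketch of the double-pass estimator (unbiased case).
# rho = proportion correct; alpha = proportion of first/second-pass agreement.
import numpy as np

rng = np.random.default_rng(0)

def predict_rho_alpha(d_in, gamma, n=50_000):
    """Predicted (rho, alpha) for input sensitivity d_in and noise ratio gamma."""
    signal = rng.integers(0, 2, n)                  # 0 = noise, 1 = signal trial
    x = rng.normal(0.0, 1.0, n) + d_in * signal     # stimulus-driven part (external noise)
    u1 = x + gamma * rng.normal(0.0, 1.0, n)        # pass 1: fresh internal noise
    u2 = x + gamma * rng.normal(0.0, 1.0, n)        # pass 2: same x, new internal noise
    c = d_in / 2.0                                  # unbiased decision criterion
    r1, r2 = u1 > c, u2 > c
    return np.mean(r1 == (signal == 1)), np.mean(r1 == r2)

def fit(rho_obs, alpha_obs):
    """Minimize squared error between observed and predicted (rho, alpha)."""
    best = (np.inf, None, None)
    for d_in in np.linspace(0.5, 5.0, 46):
        for gamma in np.linspace(0.05, 3.0, 60):
            rho, alpha = predict_rho_alpha(d_in, gamma)
            err = (rho - rho_obs) ** 2 + (alpha - alpha_obs) ** 2
            if err < best[0]:
                best = (err, d_in, gamma)
    return best[1], best[2]

d_in, gamma = fit(rho_obs=0.89, alpha_obs=0.83)     # illustrative observed values
print(f"d_in ~ {d_in:.2f}, gamma = sigma_I/sigma_N ~ {gamma:.2f}")
```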
Neri (2010) explained the discrepancy between his finding and Burgess and Colborne's by pointing out that the experiments in his review included more complex tasks than the one used by Burgess and Colborne. Indeed, when Neri (2010) restricted his analysis to low-level perceptual tasks, he obtained a value of 0.8, consistent with the earlier study of Burgess and Colborne. With more complex tasks, however, he observed γ-values larger than one. These results from the psychophysical literature may have important implications for the LDT. Given that this is a complex task, one may expect larger internal noise than external noise, meaning that participants are rather inconsistent in their responses, answering "non-word" to stimuli in the second pass that received a "word" response in the first pass (and vice versa). If the findings from the psychophysical literature (Green, 1964; Burgess and Colborne, 1988; Neri, 2010) generalize, then we can expect that only 70 to 84% of the lexical decisions will be consistent. This means that, even if we had access to the best possible model of how participants operate, we would be able to predict only about three quarters of the trial-level data. Such a finding would clearly be at odds with the implicit assumption in psycholinguistics that a yes-response to a word trial can be interpreted as evidence that the person knows the word and has recognized it (i.e., that the response choice is error-free). On the other hand, some features of a typical LDT may make it more robust against the degree of inconsistency reported for complex psychophysical tasks (Neri, 2010). A potentially relevant factor in this respect is that in a typical lexical decision experiment stimuli are shown until the participants respond (usually for a maximum of 1500 ms). This mostly results in percentages of correct responses (averaged across easy and difficult words/non-words) of more than ρ = 0.9. The LDT protocol differs from customary practice in SDT experiments, where the signal-to-noise ratio of the stimulus is selected to target a threshold output sensitivity of d′ = 1 (Green and Swets, 1966). Indeed, in the experiments surveyed by Neri (2010) the average accuracy was about ρ = 0.75 (observers responded correctly on three out of four trials), which corresponds to a d′-value close to unity (following the relation ρ = Φ(d′/√2), where Φ is the cumulative standard normal distribution function). It is not inconceivable that response consistency is higher for clearly visible stimuli than for briefly presented stimuli, which may result in lower internal noise values for LDT than for psychophysical tasks of comparable complexity. Although this argument seems plausible, it would be better, of course, if it were based on explicit empirical testing rather than on a tacit assumption. Hence, the present experiment. In addition, the psychophysical approach introduced by Burgess and Colborne (1988) and Neri (2010) can be extended to RTs. Although this kind of analysis has not been reported before and is not established in the psychophysical literature, there are no a priori theoretical objections precluding it. All that is needed is a situation in which participants respond twice to a sufficiently large sequence of words and non-words. The new analysis is interesting because it only assumes that each RT value is the sum of two components (one stimulus-dependent and one person-related).
There is no need to make further assumptions about the distribution of the components (see Materials and Methods), so that the approach is extremely general, encompassing all models based on a quasi-addition of stimulus-dependent and person-related variability. In summary, we will apply the psychophysical analysis method introduced by Burgess and Colborne (1988) and Neri (2010) to the LDT. This will allow us (1) to find out to what extent the implicit psycholinguistic assumption of error-free word and non-word responses is warranted, and (2) to determine the degree of consistent trial-level variance that can be explained in RTs. To foreshadow our findings, we will observe that the contribution of internal noise to lexical decision is much larger than commonly assumed. This is particularly the case for response selection to low-frequency words and for RTs.

MATERIALS AND METHODS

Data were obtained from the Dutch Lexicon Project (Keuleers et al., 2010b). In this study, 39 participants responded to 14,339 word trials and 14,339 non-word trials, which were presented in 58 blocks of 500 stimuli (the last block was shorter). Participants responded with their dominant hand when they thought a word was presented, and with their non-dominant hand otherwise. Importantly, to gauge practice effects in the study, the sequence of stimuli in block 50 was identical to that of block 1. As participants could only finish four blocks in an hour and rarely did more than six blocks each day, for most participants there were several weeks (and over 20 K lexical decision trials) between the first and the second run. In this way, the results were unlikely to be influenced by repetition priming effects and other influences due to episodic memory. Indeed, Keuleers et al. (2010b, Figure 1) found that the increase in response speed and accuracy across blocks 1 and 50 was very modest. For word responses participants were on average 35 ms faster and 5% more accurate. For non-words response times decreased by 22 ms, but accuracy was 2% worse in block 50. Because participants got different permutations of the complete stimulus list, the words each one saw in blocks 1 and 50 were a unique subsample of the stimulus list. As a result, the analyses presented below are not limited to a particular section of the stimulus list (which would have been the case if the words had been the same for every participant). Therefore, the characteristics of the stimuli are the same as those of the Dutch Lexicon Project as a whole (see Keuleers et al., 2010b, Table 1, for a summary and a comparison with the lexicon projects in other languages). Of further importance for the present analyses is that participants were not allowed to drop consistently below 85% overall accuracy (otherwise they were asked to leave and did not receive the full financial reimbursement). Such accuracy requirements are standard in lexical decision, where the data of participants with, for example, more than 20% errors are discarded. Accuracy was higher for non-words (94%) than for words (84%), as can be expected from the fact that not all words were known to the participants (some had very low frequencies of occurrence). The LDT conforms to a yes-no design (Green and Swets, 1966): a word (target) or a non-word (non-target) is presented on every trial, and the participant is asked to choose between these two possibilities. So, the analysis proposed by Burgess and Colborne (1988) can be applied.
A complicating factor, however, is that the equations outlined by Burgess and Colborne require the absence of response bias (i.e., participants are not more likely to select one response than the other). In the Dutch Lexicon Project, there was a small response bias toward non-word responses (−0.31), which was statistically significant [t(38) = −10.46, p < 0.001]. Luckily, Ahumada (2002, Eqs 3.1.6 and 3.1.7) derived the equations needed to estimate internal noise under conditions of potential bias; they relate the predicted double-pass agreement probabilities to the single-pass response probabilities and the internal noise (the reader is referred to the original publication for details on how the equations were derived). The following notation is used: p[s,r] is the proportion of trials on which the observer responded r (0 for non-word, 1 for word) when presented with stimulus s (0 for non-word, 1 for word); p*[s,r] is the proportion of trials on which the observer responded r to both passes of stimulus s; Φ is the cumulative standard normal distribution function; φ is the standard normal distribution density function; γ is the standard deviation of the internal noise source in units of the external noise standard deviation, i.e., σ_I/σ_N.

[Figure 1 | Illustration of the SDT model with internal noise applied to the lexical decision task. Word and non-word stimuli map onto the stimulus intensity/internal response dimension with the same normal variance but with different means, i.e., the word stimuli are on average more word-like than the non-word stimuli (A). Because of the internal noise source (B), the internal responses to the same stimuli differ to some extent between repetitions (D; the x axis represents time, the y axis the internal response: up is evidence toward a word response, down is evidence toward a non-word response), and the word/non-word responses will not be fully consistent (C). See Methods for further details.]

For non-mathematical readers, it may be good at this moment to flesh out the model to some extent. The model basically assumes that there are two stimulus categories (words and non-words), which map onto a single quantity, which can be called "the degree of wordness" (the x axis of Figure 1A). The distribution of stimuli belonging to the word category is assumed to have a higher mean value of wordness than the non-word category, but the same standard deviation (Figure 1A). Because of the variability in each category, the wordness distributions of the two categories partly overlap (i.e., some non-words have a higher degree of wordness than some words). The variability introduced at this stage is called external noise, because it is driven by the external stimulus (the degree of wordness each word and non-word in the experiment has). The model further assumes that the wordness intensity of a stimulus is mapped onto a corresponding quantity within the observer's brain, which preserves the original structure of the input (the black lines in Figure 1B). However, the output of this mapping is not error-free, due to internal noise. As a result, the variability of the quantities in the observer's brain is larger than the variability of the stimulus intensity levels (the gray lines in Figure 1B). This is true as much for words as for non-words. Furthermore, the variability introduced by the internal noise source is decoupled from the stimulus, so that the output of the internal representation in response to two presentations of the same stimulus need not be the same (a numerical sketch of this model, including the bias-tolerant estimation of γ, is given below).
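The sketch below implements this model numerically. It does not reproduce Ahumada's closed-form Eqs 3.1.6 and 3.1.7; instead it obtains the predicted quantities by numerical integration under the Gaussian assumptions just described, which should be equivalent up to discretization error. All input proportions are illustrative placeholders, not values from the study.

```python
# Numerical sketch of the bias-tolerant double-pass model of Figure 1.
# Single-pass rates pin down the criterion c and input sensitivity d_in for a
# given gamma; gamma is then tuned so that the predicted double-pass agreement
# matches the observed agreement.
import numpy as np
from scipy.stats import norm

def predict_agreement(p_fa, p_hit, gamma, n_grid=4001):
    """Predicted p*[0,0] and p*[1,1] given single-pass rates and noise ratio gamma."""
    s = np.sqrt(1.0 + gamma ** 2)
    c = -norm.ppf(p_fa) * s                 # criterion implied by the false-alarm rate
    d_in = norm.ppf(p_hit) * s + c          # input sensitivity implied by the hit rate
    x = np.linspace(-8.0, d_in + 8.0, n_grid)
    dx = x[1] - x[0]
    p_word = norm.cdf((x - c) / gamma)      # per-pass P("word") given wordness x
    p00 = np.sum(norm.pdf(x) * (1.0 - p_word) ** 2) * dx        # non-word trials
    p11 = np.sum(norm.pdf(x - d_in) * p_word ** 2) * dx         # word trials
    return p00, p11

def estimate_gamma(p_fa, p_hit, p00_obs, p11_obs):
    gammas = np.linspace(0.05, 3.0, 300)
    errs = []
    for g in gammas:
        p00, p11 = predict_agreement(p_fa, p_hit, g)
        errs.append((p00 - p00_obs) ** 2 + (p11 - p11_obs) ** 2)
    return gammas[int(np.argmin(errs))]

# Illustrative proportions: 6% false alarms, 84% hits, and double-pass
# agreement of 0.91 (non-word trials) and 0.76 (word trials).
print("gamma ~", estimate_gamma(p_fa=0.06, p_hit=0.84, p00_obs=0.91, p11_obs=0.76))
# Conventional SDT criterion measure, for reference (sign convention may differ
# from the bias value quoted in the text):
print("bias c =", -(norm.ppf(0.84) + norm.ppf(0.06)) / 2)
```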
As a result of this decoupling, the internal responses to a given sequence of stimuli will contain the repetitive structure present in the stimulus sequence (due to the degree of wordness of each stimulus; black traces in Figure 1D), but in addition they will contain some non-repetitive structure due to the variability introduced by the internal noise (the gray traces in Figure 1D). Finally, the SDT model assumes that observers set a threshold value for converting the output of the internal representation into a binary response of the word/non-word type. If the internal representation exceeds this threshold (indicated by the horizontal line in Figure 1D) they respond "word"; otherwise they respond "non-word." From the response sequences in the first and the second pass we can compute the quantities needed for these equations, namely p[0,0] and p[1,0], and p*[0,0] and p*[1,1]. On the basis of these quantities, we can then estimate the internal noise intensity (γ) that minimizes the mean-square error between the predicted and observed p*[0,0] and p*[1,1] given p[0,0] and p[1,0]. If the sequence of responses in the first and the second pass is exactly the same, the best estimate of γ will be 0, because there is no internal noise (the responses are fully driven by the wordness values of the stimuli). Conversely, the more the sequences of responses differ between the first and the second pass, the higher the estimated γ-value must be to account for the lack of consistency. To estimate the degree of internal noise in RTs, we simply assumed that the observed RTs were the sum of two processes, one related to the stimulus and one decoupled from (i.e., independent of) the stimulus. The former can be thought of as the stimulus-induced internal representation in Figure 1B (black trace); the latter as the participant-dependent internal noise (gray trace). The predicted pattern of RTs is then the same as in Figure 1D: RTs are assumed to consist of a component identical in both passes (black traces), together with a component differing between the two passes (gray traces). It is easy to show that the correlation coefficient R between the two sequences of RTs then equals

R = σ_N² / (σ_N² + σ_I²)

where σ_I is the standard deviation of the internal noise source and σ_N the standard deviation of the external (stimulus-driven) component. The quantity we are interested in is the ratio γ = σ_I/σ_N, i.e., the intensity of the internal noise source in units of the standard deviation of the degrees of wordness. This is easily obtained as γ = √(1/R − 1). Before calculating R, we inverse transformed all RTs (i.e., −1000/RT) to correct for the positive skew in the RT distribution (Ratcliff, 1993). Finally, we calculated the output sensitivity d′ = Φ⁻¹(p[1,1]) − Φ⁻¹(p[0,1]) for each participant (a short numerical illustration of these RT computations follows the next paragraph).

RESULTS

Our main findings are summarized in Figure 2. Starting with Figure 2A, we notice that sensitivity (d′) is considerably higher (at a value of about 2) in the lexical decision experiment than the value of 1 typically targeted in SDT experiments (Neri, 2010). This is not surprising given that SDT experiments emphasize threshold visibility, whereas lexical decision experiments emphasize clear visibility of the stimulus. A more interesting feature of Figure 2A is that the internal noise estimates (x axis), expressed as γ = σ_I/σ_N, are below 1 for nearly all participants (typically around 0.6), indicating that internal noise (σ_I) was smaller than external noise (σ_N).
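The following minimal sketch illustrates the RT-based noise estimate and the output sensitivity computation described in the Methods above; the RT vectors and choice proportions are hypothetical placeholders, not data from the study.

```python
# RT-based internal noise estimate and output sensitivity (illustrative data).
import numpy as np
from scipy.stats import norm

rt1 = np.array([512., 640., 487., 701., 553.])   # pass 1 RTs in ms (hypothetical)
rt2 = np.array([498., 702., 465., 688., 575.])   # pass 2 RTs, same stimuli

z1, z2 = -1000.0 / rt1, -1000.0 / rt2            # inverse transform to reduce skew
R = np.corrcoef(z1, z2)[0, 1]                    # test-retest correlation
gamma_rt = np.sqrt(1.0 / R - 1.0)                # gamma = sigma_I / sigma_N

# Output sensitivity from single-pass choice proportions (hits vs. false alarms).
p_hit, p_fa = 0.84, 0.06                         # illustrative values
d_prime = norm.ppf(p_hit) - norm.ppf(p_fa)
print(f"gamma(RT) = {gamma_rt:.2f}, d' = {d_prime:.2f}")
```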
As indicated in the Introduction, this relatively low estimate (γ around 0.6) is to be contrasted with the average value of 1.3 reported by Neri (2010) for complex psychophysical tasks. It therefore appears that, despite the taxing cognitive demands associated with LDT, internal noise in a lexical decision experiment is relatively low and does not exceed the external noise source (i.e., γ < 1). At the same time, the impact of internal noise is not zero, as assumed by psycholinguists. There is some degree of inconsistency in the response selections made by the participants in the first and the second pass. Not all stimuli that were "recognized" as words in the first pass were also "recognized" in the second. Similarly, not all stimuli that failed to elicit a word response in the first pass were considered as non-words in the second pass. Further interesting is the observation that sensitivity correlated negatively with γ across individuals [R = −0.57, |t(37)| = 4.18, p < 0.001]. So, the most accurate participants showed the smallest γ-values (Figure 2A). This is in line with the hypothesis that the low degree of internal noise we observed in the LDT was partly due to the fact that only participants with good knowledge of the words were included in the study. Indeed, if we extrapolate the linear regression line between sensitivity and γ to d′ = 1, the predicted value of γ falls within the range reported for sensory processing (i.e., 1.35 ± 0.75 SD, see Neri, 2010), suggesting that the degree of internal noise in lexical decision is comparable to non-verbal perceptual tasks when task difficulty is matched. Figure 2B shows that the internal noise is higher for words than for non-words [most of the points fall below the solid unity line; M = 0.32, |t(38)| = 4.45, p < 0.001]. This is in line with the observation that accuracy was higher for non-words than for words (see above). A difference in accuracy is also the most likely explanation for why internal noise was higher for low-frequency words than for high-frequency words [M = 0.27, |t(38)| = 1.91, p < 0.05; Figure 2C]. Participants were less accurate on trials with words that had a frequency of less than 1 occurrence per million words than on trials with higher-frequency words, and for these words they showed higher γ-values. In other words, internal noise shows a tendency to scale inversely with accuracy (non-words < high-frequency words < low-frequency words). Noise estimates are highest for low-frequency words and fall within the range reported for perceptual tasks (Neri, 2010; 1.05 ± 0.88 SD vs. 1.35 ± 0.75 SD). We also observed a significant positive correlation between the internal noise values on word and non-word trials [R = 0.35, |t(37)| = 2.28, p < 0.05], but not between the internal noise values for high- and low-frequency words [|t(37)| < 1]. Finally, Figure 2D shows that γ was much higher for RTs than for response choice [M = 1.19, |t(37)| = 15.34, p < 0.001], with a significant positive correlation between the two estimates [R = 0.54, |t(36)| = 3.85, p < 0.001]. More specifically, γ was about three times higher for RTs (values around 1.8) than for response choices (values around 0.6). When estimated from RT data, none of the participants showed lower internal noise than external noise (i.e., all γ > 1). Further analyses indicated that there were no significant differences or correlations for RT-based internal noise as a function of lexicality (word vs. non-word) or word frequency (x < 1 vs. x ≥ 1 per million).
It might be objected that all of the above-detailed measurements rely on a comparison between only two passes of the same set of stimuli. Two questions naturally arise in relation to this approach. First, are the internal noise estimates biased for a low number of passes, i.e., is it expected that lower estimates may be obtained with a multi-pass procedure that employs more than two passes? Second, if the estimates are not biased, what is their precision? In relation to the former question, there is no a priori reason to expect that estimates should be biased depending on the number of passes involved; in support of this notion, multi-pass methods with more than two passes have reported internal noise estimates within the same range reported with double-pass methods (Li et al., 2006). In relation to the latter question, recent work (Hasan et al., 2012) has estimated the precision of the double-pass method to be in the range of 10-20%, depending on the number of trials and observers associated with the measurements. The conclusions we draw in this article are valid within a range of error that is well within the above precision value.

DISCUSSION

Researchers using LDTs typically make theoretical claims on the basis of correct word trial RTs only. The linear statistical models adopted in these studies assume random measurement error for RTs, but not for response choices. It is also not taken into account to what degree random RT fluctuations reflect participant-internal (i.e., cognitive) or merely external noise. The fact that these models assume that the actual choice for a word/non-word response is fixed, i.e., the product of an error-free system, is potentially problematic for valid theoretical conclusions. Decisions are supposed to be 100% reliable: participants respond "yes" because they have recognized the word, and they respond "no" to the stimuli they do not know. This notion stands in sharp contrast with results from psychophysical research showing that internal noise introduces considerable inconsistency across identical trials (Burgess and Colborne, 1988). Our goal in this study was to bridge the gap between the lexical and psychophysical research traditions by analyzing the data of a recently collected, large-scale lexical decision experiment using statistical techniques based on SDT (Burgess and Colborne, 1988; Ahumada, 2002; Neri, 2010). We profited from the fact that the first block of 500 trials in the Dutch Lexicon Project (Keuleers et al., 2010b) was repeated in block 50, allowing us to measure the consistency of word/non-word choices and RTs to the same stimuli; we then used these measurements to derive the corresponding internal noise estimates. Our analyses clearly document that the assumption of a noiseless decision process in the LDT is unwarranted. The amount of internal noise was substantial, not only with respect to RTs but also when computed from word/non-word choice data. The most prominent implication of this result is that the ability to model trial-level lexical decision data is in fact more limited than is perhaps appreciated by most researchers in this field. According to our analysis, when participants are presented with the same lexical decision trials on different occasions, they will produce the same word/non-word choice on only about 83% of the trials (9% SD).
This implies that an optimal choice model (i.e., a model that faithfully replicates the cognitive process used by the participant) would only be able to predict between 83 and 91% of the observed responses (Neri and Levi, 2006; see footnote 3). The situation is even worse for RTs. The ratio of internal to external noise was considerably larger (about three times) for RTs than for the choice data (see Figure 2D). From the squared correlation across the two blocks we learn that only about 8% of the variance in the (correct and consistent) RTs was replicated (R² = 0.08 ± 0.04 SD; see footnote 4), prompting us to maintain modest expectations about our ability to predict trial-level RT data via, for instance, linear mixed-effects models (e.g., Baayen et al., 2008) or explicit computational models (e.g., Balota and Spieler, 1998; Seidenberg and Plaut, 1998; see footnote 5). Our analyses point to another issue. Based on the measured percentages of response choice agreement, internal noise was significantly larger for words than for non-words (Figure 2B). There was 90% agreement for non-word trials, compared to only 76% for words. The relatively poor agreement for words appears to be due to the low-frequency words. This can be seen clearly when we predict the trial-level agreement data on the basis of (logarithmic) word frequency values through a mixed model with a logistic link function in which participants and stimuli are used as crossed random factors. Figure 3 shows estimates of the lower and upper bounds of the optimal model performance (Neri and Levi, 2006) as a function of word frequency (to model the nonlinearity, frequencies were expanded into natural splines). The graph illustrates that optimal performance is quite high (and similar to non-words, at 90-95%) for frequencies above 10 per million, but drops to near 50% (chance) for the words with the lowest frequencies.

Footnote 3: The lower bound is given by the percentage of agreement α and the upper bound by the formula (1 + √(2α − 1))/2; for α = 0.83, this gives (1 + √0.66)/2 ≈ 0.91, the upper end of the range quoted above.

Footnote 4: This within-participant replication compares to an average between-participant replication of R² = 0.02 ± 0.02 SD.

Footnote 5: Psycholinguists typically deal with the high level of noise in RTs by taking mean RTs across a group of participants (usually around 40). Rey and Courrieu (2010) reported that this practice indeed increases the reliability of the RT values of the Dutch Lexicon Project to 84%. It is important to keep in mind, however, that this value represents the replicability of variance at the level of item RTs averaged over a group of participants. The present analysis shows that at the individual level, only some 8% of the variance is systematic. Especially in the context of computational models, it is critical to ask whether predictions should be made at the level of average or individual data. In the latter case it may be more sensible to correlate the average performance over several runs of the model with the average performance of a group of participants.

The observation of high accuracy for most non-words and high-frequency words, together with decreasing accuracy for low-frequency words, is in line with an SDT model containing two response criteria (Krueger, 1978; Balota and Chumbley, 1984). In such a model, a low criterion is placed at the low end of the higher distribution of stimulus intensities (i.e., at the low end of the wordness values of the words in Figure 1A), and a high criterion is placed at the high end of the lower distribution (i.e., at the high end of the wordness values of the non-words in Figure 1A).
Stimuli with wordness values below the low criterion elicit fast non-word responses, because virtually no words have such low values. Similarly, stimuli with wordness values above the high criterion get fast word responses, because there are virtually no non-words with such high values. Stimuli with intensity values between the low and the high criterion (for which it is not immediately clear which decision to make) receive further verification processing or elicit a random response (a toy simulation of such a two-criterion rule is given at the end of this section). Interestingly, the frequency value of 10 per million is the value below which the bulk of the RT word frequency effect in the Dutch Lexicon Project is situated (Keuleers et al., 2010b). This agrees with Balota and Chumbley's (1984) warning that a large part of the word frequency effect in LDT may be due to the decision part and not to differences in word processing speed, even though there is evidence that the frequency effect is not completely absent from the word processing part (Allen et al., 2005). This once again points to the possibility that LDT data may say as much (and possibly more) about the task that is performed (binary decision) as about the process psycholinguists are interested in (the speed of word recognition). After all, in normal reading the job is not to decide on the wordness of each letter string, but to activate the correct meanings of the letter strings. This discrepancy between reading and LDT is particularly worrying, given the low correlation we recently observed between lexical decision times and gaze durations to the same words in fluent text reading. Further complicating the picture is the finding that the correlation between RTs in LDT and gaze durations in reading is higher when the words are not part of continuous text but positioned in unconnected, neutral carrier sentences (Schilling et al., 1998). Clearly, more research is needed here to chart the commonalities among the tasks and the divergences. A further sobering fact is the high internal noise we found for RTs. This was even true for the high-frequency words. Even though the high optimal model performance based on response accuracies (Figure 3) suggests that for these words RTs can be interpreted as the outcome of true word processing, we found no evidence that the internal/external RT noise ratio for these words was significantly lower than for low-frequency words. Our estimate of 8% for the optimal model performance with respect to RTs appears to apply irrespective of word frequency. This was true both in an analysis distinguishing words with frequencies higher or lower than one per million, and in a more fine-grained analysis attempting to predict the squared difference between trial-level RTs in the first and second pass (using a mixed model with participants and stimuli as crossed random factors and allowing for non-linearity via natural splines). On a more general level, our analyses demonstrate considerable overlap between the LDT and psychophysical signal detection tasks. It appears that the degree of internal noise relative to the level of external noise is comparable between the two classes of tasks provided sensitivity is matched. The primary reason why the ratio is smaller in lexical decision than in representative psychophysical tasks (Neri, 2010) seems to be the higher visibility of the stimuli in lexical decision.
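The following toy simulation sketches the two-criterion decision rule described above (Krueger, 1978; Balota and Chumbley, 1984). All numeric settings (the criteria, the RT values, and the form of the verification stage) are illustrative assumptions, not parameters estimated from the data.

```python
# Toy simulation of a two-criterion lexical decision rule: fast responses
# outside the criteria, slow and error-prone "verification" in between.
import numpy as np

rng = np.random.default_rng(1)
LOW, HIGH = -1.0, 1.5          # low and high wordness criteria (assumed values)

def decide(wordness):
    """Return (response, RT in ms) for a stimulus with the given wordness."""
    if wordness < LOW:
        return "non-word", 450.0                      # fast non-word response
    if wordness > HIGH:
        return "word", 450.0                          # fast word response
    # ambiguous zone: a second, noisy look adds time and can still err
    verified = wordness + rng.normal(0.0, 0.8)
    return ("word" if verified > (LOW + HIGH) / 2 else "non-word"), 700.0

# Words drawn from N(1.5, 1), non-words from N(0, 1), as in Figure 1A.
for label, mu in [("word", 1.5), ("non-word", 0.0)]:
    resp, rt = decide(rng.normal(mu, 1.0))
    print(f"{label} stimulus -> {resp}, {rt:.0f} ms")
```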
It is relevant to this discussion that the inverse relation between internal noise and sensitivity we report for the lexical task (Figure 2A) has also occasionally been observed in some perceptual tasks (see Figure 4B in Burgess and Colborne, 1988) but not in others. As for the latter, Neri (2010) reported no correlation between sensitivity and internal noise for the datasets considered in his article (see also Gold et al., 1999). However, the range of sensitivity values spanned by these datasets was smaller than the one we report here for LDT (most data points in Neri (2010) fell below a value of 2, whereas our sensitivity data are mostly above 2; see the y axis in Figure 2A). This difference in range may account for the lack of correlation reported by Neri (2010), and points to the importance of establishing whether the relation between sensitivity and the internal/external noise ratio represents a fundamental property of the human cognitive system that applies to a broad range of different choice paradigms, or whether it presents different characteristics across cognitive tasks. Not just for this reason, but also for the purpose of generally becoming more aware of the importance and impact of internal (in)consistency, we believe it is critical to take the current analyses to different areas of cognitive research. The similarity of lexical decision to other signal detection tasks illustrates the utility of mathematical models of lexical decision that include noise both at the RT and at the response choice level. Models of this kind are being developed (see in particular the drift diffusion model of Ratcliff et al., 2004; also see Norris and Kinoshita, 2012), but at present they do not provide the same flexibility of data analysis as the linear models (e.g., Rigby and Stasinopoulos, 2005). It will be interesting to see to what extent these models will be able to simultaneously account for the usual factors influencing word processing and the degree of noise observed in the present study (for example with respect to the frequency curve shown in Figure 3). To summarize, we have for the first time analyzed the level of internal noise associated with response choice and RTs in the LDT. The results show lower internal noise values for response choice than for RTs. Non-word choices and word choices for words with a frequency above 10 per million are especially consistent. The results for words with frequencies of less than 10 per million indicate a substantial degree of guessing, seriously questioning the validity of RT data for these stimuli, at least with the LDT. An optimal response choice model could reach more than 90% accuracy for non-words and high-frequency words, whereas an optimal RT model would only explain about 8% of the trial-level data, irrespective of word frequency. It is important to keep these figures in mind when data are analyzed with linear models, because there is no way of directly estimating them in the usual single-pass lexical decision experiment. It will also be interesting to understand the extent to which models that do not assume fixed response choices will be able to account for the present findings.

CONCLUSION

We ran a signal detection analysis on the responses in a LDT (both response choices and RTs) to obtain a quantitative estimate of the noise in this task. Given that we found rather high levels of noise under some circumstances, these are the implications we see for researchers using LDT to investigate word processing:
1. LDT is a signal detection task with a rather high degree of noise, also in response choices, implying that not all word responses come from trials on which the participant recognized the stimulus as a known word. This is particularly the case for words known by less than 80-90% of the participants, and for participants who know less than 80-90% of the words. In these cases, rather high percentages of word responses seem to be guesses that turn into non-word responses when the block of stimuli is repeated.

2. Because of the noise in the response choices, RTs of "correct" responses should be treated cautiously if they come from conditions with more than 10% errors. This may be an issue, for instance, when data are compared across tasks.

3. If authors want to base their conclusions on RTs, they are advised to make sure the stimuli are known to their participants. Possible sources for this are the percentages known in the English Lexicon Project (Balota et al., 2007; 40,000 words) and the British Lexicon Project (Keuleers et al., 2012; 28,000 words). Another variable to take into account in this respect is the vocabulary size of the participants (Diependaele et al., 2012; Kuperman and Van Dyke, 2012).

4. The good performance for well-known words and for most non-words suggests that two response thresholds are used in LDT. This finding may be worthwhile to integrate in computational models of the task (Davis, 2010; Dufau et al., 2012).
Anisohypermetropia as a sign of unilateral glaucoma in the pediatric population

Childhood glaucoma poses a diagnostic and therapeutic challenge to ophthalmologists. Difficulty in examination and limitations on the ability to perform structural and functional testing of the optic nerve make diagnosis and verification of glaucoma control difficult in children. It is well known that an excessive loss of hyperopia is a useful sign in alerting the examining ophthalmologist to the possible diagnosis of glaucoma. We present an interesting case of juvenile onset glaucoma presenting with anisohypermetropic amblyopia in one eye and normal vision in the fellow eye that has glaucoma. It is an unusual case, as the left eye, with abnormal vision from hypermetropic amblyopia, though itself requiring treatment, was a red herring for a potentially blinding condition in the fellow eye with normal vision and lower, less amblyogenic hyperopia on examination. We believe that glaucomatous enlargement of the right eye resulted in a significant loss of hyperopia in that eye and in turn contributed to anisohypermetropic amblyopia in the left eye. To the best of our knowledge, this is the first reported case of juvenile onset glaucoma presenting with anisohypermetropic amblyopia in one eye and normal vision in the fellow eye that has glaucoma.

Introduction

Childhood glaucoma poses a diagnostic and therapeutic challenge to ophthalmologists. Difficulty in examination and limitations on the ability to perform structural and functional testing of the optic nerve make diagnosis and verification of glaucoma control difficult in children. We have found an excessive loss of hyperopia to be a useful sign in alerting the examining ophthalmologist to the diagnosis of juvenile open-angle glaucoma (JOAG).

Case

A 6-year-old Japanese boy with an unremarkable birth and medical history was referred to our clinic for decreased vision in the left eye that was picked up during a routine health checkup in school. Otherwise, he did not complain of any blurred vision. His parents had noticed that he adopted a left face turn of ~1 year duration when watching television. There was no significant family history of ocular disease, and his parents were emmetropic. On examination, visual acuity was 6/6 in the right eye and 6/45 in the left eye. There was no relative afferent pupillary defect, and the eyes were orthotropic with full extraocular motility. The anterior segment was unremarkable, with clear corneas and deep anterior chambers. Cyclorefraction revealed hypermetropic anisometropia of +1.5 D sphere in the right eye and +5.5 D sphere in the left eye. The initial impression was left amblyopia secondary to high hyperopia; however, careful dilated fundal examination also revealed optic disc asymmetry, with a cup-disc ratio of 0.6 in the right eye and 0.4 in the left eye (Figure 1). Intraocular pressures (IOPs) were then measured by iCare tonometer and found to average 35 mmHg and 15 mmHg in the right and left eye, respectively, with repeated examinations demonstrating similarly abnormally high readings in the right eye. Horizontal corneal diameters were ~11.0-11.5 mm in the right eye and 11.0 mm in the left eye. Axial length was 22.65 mm in the right eye and 21.14 mm in the left eye, and central corneal thickness averaged 0.594 mm in the right eye and 0.578 mm in the left eye. A diagnosis of right JOAG and left anisometropic amblyopia was made.
He was started on Gutt Cosopt bd in the right eye, and was also prescribed glasses, with advice for patching of the right eye 6 hours a day. On review about a week later, his IOP was normal at 9 mmHg in the right eye and 11 mmHg in the left eye. He was reviewed regularly, with IOP maintained in the low teens while on Gutt Cosopt bd. Optic nerve head topography on Heidelberg retinal tomograph (HRT) and optical coherence tomography of the optic nerve head and peripapillary retinal nerve fiber layer were normal except for rim area asymmetry noted on HRT (Figures 2A and 2B). Humphrey visual field testing was attempted but unreliably performed (Figures 3A and 3B). Patching was eventually discontinued as the amblyopia resolved, with visual acuity improving to and maintained at 6/6. The patient's parent has given written informed consent to have the case details and accompanying images published.

Discussion

Childhood glaucoma is an uncommon pediatric condition often associated with significant visual impairment.1 According to the latest consensus by the World Glaucoma Association, childhood glaucoma is classified as primary or secondary. Primary congenital glaucoma and JOAG constitute the primary childhood glaucomas.3 Primary congenital glaucoma is caused by isolated angle anomalies and consists of three subcategories by age of onset: neonatal (0-1 month), infantile (>1-24 months) and late onset (>2 years).3 JOAG presents anywhere from childhood to early adulthood, much as adult primary open-angle glaucoma.3 There is no consensus on the age limits for diagnosing JOAG, and the distinction between adult and juvenile forms of open-angle glaucoma based on age is regarded as arbitrary.4 Secondary childhood glaucomas are associated with non-acquired ocular or systemic anomalies. JOAG is an uncommon subset of pediatric glaucoma and is usually transmitted in an autosomal dominant fashion, most commonly involving the myocilin protein.5,6 Myocilin is found in trabecular meshwork cells, trabecular beams and juxtacanalicular connective tissues; mutations lead to accumulation of misfolded proteins and endoplasmic reticulum stress that compromise the trabecular meshwork cells regulating IOP.7 Two types of juvenile glaucoma have been described: one is not associated with any gonioscopic abnormalities, and the other is associated with iridocorneal angle abnormalities and is termed "goniodysgenesis".[8][9][10][11] A recent population-based study reported the incidence of JOAG to be 0.38 per 100,000 residents between 4 and 20 years of age, and an epidemiological study from the Dallas Glaucoma Registry reported that JOAG comprised ~4% of all childhood glaucomas.12,13 There have been reports of a male preponderance, an association with myopia, and severe elevation of IOP with large diurnal fluctuations.8,14-21 Early identification and treatment of glaucoma in children are vitally important, as these patients have a longer life expectancy than typical glaucoma patients. However, early diagnosis may be difficult for many reasons. Careful examination of the optic nerve head, measurement of IOP and visual field assessment are often challenging in these young patients. Furthermore, patients with juvenile glaucoma are often without symptoms despite the increased IOPs. Signs and symptoms of congenital glaucoma such as epiphora, photophobia, blepharospasm, Haab's striae, corneal clouding and increasing corneal diameter are often not seen with juvenile glaucoma.
Hence, in a child whose sclera is still vulnerable to the effects of elevated IOP, proxies of persistently elevated IOP such as enlarging corneal diameter, increasing axial length and progressive myopia also need to be taken into consideration and assessed regularly. Therefore, a marked change or significant inter-eye difference in refractive error may be an indicator of juvenile glaucoma, which should prompt us to perform meticulous examination of the optic nerve head and IOPs. With the advent of iCare rebound tonometry and its greater tolerability in children, there is now increased success in obtaining an IOP measurement in the pediatric population. Anisometropia is not uncommonly seen in the pediatric population and is not necessarily attributable to glaucoma.22 However, it is noteworthy that 31% of a sample of patients with primary congenital glaucoma showed at least 2.0 D of anisometropia, and that 100% of patients have at least this amount of anisometropia if unilateral primary congenital glaucoma is diagnosed.1,23 In our presented case, the patient was hyperopic with a 4.0 D difference between the two eyes; although the anisometropia may have been contributed to in part by the left eye not emmetropizing as normally expected in our patient's age group, we believe that glaucomatous enlargement of the right eye resulted in a significant loss of hyperopia in that eye and in turn contributed to anisohypermetropic amblyopia in the left eye. The diagnostic limitation of our case was that gonioscopy was not performed, and the diagnosis of JOAG was based on the preponderance of evidence. The level of IOP with possible optic nerve head changes suggestive of glaucoma indicated immediate treatment, and the IOP fell within normal limits once pressure-lowering drops were started. Subsequent optic nerve head imaging was normal. There are few data available on the therapeutic options for JOAG, possibly due to the rarity of the condition. Some studies suggested that juvenile glaucoma may need primary surgical treatment.24,25 However, since JOAG has also been postulated to be a subset of adult POAG with an earlier age of onset, it may be possible to give these patients a trial of medical therapy.26 Gupta et al20 found in their cohort of high-pressure JOAG patients that medical therapy alone could control IOP and prevent glaucomatous progression in 52% of their patients over a 5-year follow-up. The success of medical therapy could be related to the severity of angle dysgenesis. Those presenting at a younger age may have greater trabeculodysgenesis with more severe disease that requires early surgery, compared with those who present later.9 Filtering surgery in the juvenile age group is known to have lower success rates than among adults.27 Trabeculectomy without mitomycin in the juvenile age group has been reported to have a success rate of 68% over a 3-year follow-up.28 Trabeculectomy with mitomycin improves surgical success among JOAG patients but carries a higher risk of postoperative hypotony maculopathy and long-term bleb-related infections.28,29 Post-trabeculectomy cataract formation in juvenile glaucoma is also known to occur with the same frequency as in adult glaucomas.30 Given the longer life expectancy of these patients, decisions to undertake surgery in these young patients will have to be taken with caution.
This is an unusual case in that the eye with abnormal vision from amblyopia, though by itself requiring treatment, was a red herring for a potentially blinding condition in the fellow eye with normal vision on examination. All patients must be thoroughly examined when asymmetry is detected between the two eyes, instead of assuming the common diagnosis of anisohypermetropic amblyopia. The importance of detailed fundal examination, even in the presence of good central vision, cannot be overemphasized. To the best of our knowledge, this is the first reported case of juvenile-onset glaucoma presenting with anisohypermetropic amblyopia in one eye and normal vision in the fellow eye that has glaucoma.

Conclusion

IOP measurement and optic disc appearance are fundamental features of the examination of a child with glaucoma. However, rapid changes in refractive status and axial length are helpful in both diagnosing and monitoring childhood glaucoma while the sclera remains vulnerable to the effects of elevated IOP. Development of an excessive loss of hyperopia in our pediatric patients is a useful sign for identifying glaucoma suspects. Measures to ensure prompt and adequate evaluation are important to confirm the diagnosis of glaucoma and for early treatment to minimize the degree of visual impairment.

Disclosure

The authors report no conflicts of interest in this work.
The Study on Characteristics of Precipitation and Its Return Period Calculation in Wuhan in Recent 30 Years Based on Data Analysis

Meteorological big data covers a wide geographical range and has large space-time density, many data types and strong timeliness. Quickly extracting the vast information and knowledge in big data to solve weather-prediction problems has become a requirement of the development of the meteorological industry. Based on the measured data of maximum one-hour rainfall from six representative rainfall stations in Wuhan from 1992 to 2021, the variation law, characteristics and return periods of precipitation of different durations in Wuhan over the past 30 years are analyzed. The results show that although the number of precipitation days in Wuhan has increased year by year over the past 30 years, the number of days with rainstorms or heavier precipitation generally shows a decreasing trend. Precipitation is mainly concentrated in spring (March-May) and summer (June-August), with the most in summer. Monthly precipitation is mainly concentrated in July, with the least precipitation in December. The maximum daily precipitation ranges between 55.1 and 285.7 mm, with an average of 122.4 mm, and the maximum hourly precipitation reaches 98.6 mm. Finally, based on the parameters of the GEV distribution, the maximum hourly precipitation, maximum 3 h, 6 h, 12 h and 24 h precipitation, maximum continuous hourly precipitation, maximum daily precipitation and maximum continuous daily precipitation are fitted, and the values for different return periods are estimated.

Introduction

Urbanization is an important symbol of national modernization. According to the data of the seventh national census, the resident population in cities and towns in China is 901.99 million, accounting for 63.89% of the total population. Against the background of accelerating urbanization, the hydrological characteristics of urban areas have changed greatly under the influence of a series of factors, such as the continuous expansion of urban construction areas, the increase of impervious area, the construction of drainage systems and so on [1][2][3]. With the continuous increase of urban population and asset density, the disaster losses caused by flood and waterlogging in cities of the same scale have increased markedly [4][5]. According to the statistics of the China Flood and Drought Disaster Bulletin, from 2006 to 2016 floods occurred in more than 160 cities across the country, with an average annual direct economic loss of more than 200 billion Yuan RMB. The dependence of cities on lifeline systems is gradually increasing, and the impact of flood disasters clearly extends beyond the flooded area itself [6][7][8]. Prevention of urban flood disasters has become important work for ensuring the safety of the lives and property of urban residents [9][10]. In view of the new characteristics of urban flood disasters, and in order to further enhance urban flood prevention and control capacity and reduce flood risk, Wuhan is actively establishing a comprehensive and systematic flood control engineering system and raising the defense standards of flood control projects, so as to effectively reduce flood disaster losses within the flood control standard. However, at present there remain a series of problems in how to systematically manage, prevent and control urban flood disasters under complex natural and cultural conditions.
For this reason, the Yangtze River Survey, Planning and Design Research Co., Ltd. has carried out research on key technologies for improving the waterlogging prevention capacity of urban agglomerations. Since urban waterlogging is closely related to precipitation, rainfall of 80 mm to 100 mm occurring in some urban areas within a short period of time produces many local floods. The Yangtze River Survey, Planning and Design Research Co., Ltd. therefore entrusted the Wuhan Rainstorm Research Institute of the China Meteorological Administration to analyze urban rainfall patterns in Wuhan based on precipitation data from the past 30 years, in order to analyze the environmental causes of frequent urban floods in Wuhan and to formulate more scientific urban flood prevention measures.

Analysis of the Characteristics of Annual Precipitation

According to the statistics of the Wuhan national basic weather station, from 1992 to 2021 the annual precipitation in Wuhan has increased year by year. The average annual precipitation is 1290.4 mm, the maximum annual precipitation occurs in 2020, at 2012.4 mm, and the minimum annual precipitation is 899 mm.

Analysis of Seasonal Precipitation Characteristics

The precipitation in Wuhan is mainly concentrated in spring (March-May) and summer (June-August), as shown in Figure 2, with the most in summer: the average precipitation is 345.2 mm in spring, 420.6 mm in summer, 224.1 mm in autumn, and 151.9 mm in winter. During the 30 years from 1992 to 2021, the maximum spring precipitation appears in 2002 (653.8 mm) and the minimum in 2011 (145.1 mm); the maximum summer precipitation appears in 2016 (1200.2 mm) and the minimum in 2001 (214.4 mm); the maximum autumn precipitation appears in 2020 (424.4 mm) and the minimum in 2007 (81.1 mm). The maximum winter precipitation is 220.1 mm in 2020, and the minimum is 34.3 mm in 1999.

Analysis of Monthly Precipitation Characteristics

The monthly precipitation in Wuhan is mainly concentrated in July, as shown in Figure 3, with an average precipitation of 235.6 mm in July and the least in December, at 29.4 mm. The maximum monthly precipitation over the period is 758.4 mm, in July 1998, and the minimum monthly precipitation appeared in December 1999, at 0 mm (see Table 1 for details).

Analysis of Daily Precipitation Characteristics

According to the precipitation data of the Wuhan national basic meteorological station, during the 30 years from 1992 to 2021 the maximum daily precipitation in Wuhan is between 55.1 and 285.7 mm, with an average of 122.4 mm; the maximum daily precipitation in 1998 is the largest, reaching 285.7 mm, and the smallest occurs in 2017, at 55.1 mm. The maximum continuous daily precipitation in Wuhan is between 93.2 and 582.5 mm, with an average of 218.93 mm; the maximum continuous daily precipitation is largest in 2016 and smallest in 2018. The longest continuous precipitation spell in Wuhan lasted 16 days (1992.03.13-1992.03.28) in 1992, with a process precipitation of 174.1 mm. In 1995, the longest continuous precipitation spell lasted only 4 days (1995.02.10-1995.02.13), with a process rainfall of 32.1 mm.
In the past 30 years, the process rainfall of the longest continuous precipitation spell is largest in 2016, reaching 582.5 mm, and smallest in 2011, at 9.7 mm.

Analysis of the Characteristics of Short-Duration Precipitation

Under the effect of urbanization, the occurrence probability and rainfall intensity of high-intensity local torrential rain in the urban central area are greatly increased, which increases the natural risk of flooding in the city. In recent years, extreme rainstorm events have occurred frequently in Wuhan, and the intensity of short-duration precipitation directly affects the severity of rainstorm waterlogging disasters. In order to better carry out the prevention and control of rainstorm and waterlogging disasters in Wuhan, the maximum hourly, 3 h, 6 h, 12 h, 24 h and continuous hourly precipitation in Wuhan are statistically analyzed based on the hourly precipitation data of the national basic weather station in Wuhan. The statistical analysis shows that from 1992 to 2021, the maximum hourly precipitation in Wuhan ranges from 22.1 to 98.6 mm, with a maximum of 98.6 mm; the maximum 3-hour precipitation ranges from 38.3 to 158.6 mm, with a maximum cumulative value of 158.6 mm; the maximum 6-hour precipitation ranges from 46 to 221.2 mm, with a maximum cumulative value of 221.2 mm; the maximum 12-hour precipitation ranges from 62.4 to 276.7 mm, with a maximum cumulative value of 276.7 mm; the maximum 24-hour precipitation ranges from 70 to 293.3 mm, with a maximum of 293.3 mm; and the maximum continuous hourly precipitation ranges from 57.1 to 280 mm.

Analysis of the Return Period of Precipitation of Different Durations

Extreme precipitation is a small-probability event and belongs to the category of extreme value theory. In recent years, a variety of extreme value distribution models have been used to study extreme precipitation, floods and river runoff, and a variety of return period estimation methods have been derived. In general, a random variable X follows some underlying distribution, and the extreme value is the maximum or minimum value selected from a random sequence:

M_n = max(x_1, x_2, ..., x_n),  m_n = min(x_1, x_2, ..., x_n),

which represent the maximum and minimum of n random variables, respectively; their probability distributions can be fitted by the generalized extreme value (GEV) model. Precipitation of different durations exhibits extreme values, so it can also be fitted by the GEV model. The distribution function of the GEV model can be expressed as

F(x; μ, σ, ε) = exp{ -[1 + ε (x - μ)/σ]^(-1/ε) },  for 1 + ε (x - μ)/σ > 0,

where ε, μ and σ are the shape, location and scale parameters, respectively. When the shape parameter takes different values, three different extreme-value behaviors result. The parameters of the GEV distribution can be estimated by the method of moments, the maximum likelihood method, the Gumbel method, the probability-weighted moments method, and so on. Because the maximum likelihood method adapts easily to complex models and its parameter estimates are quite accurate, this paper uses the maximum likelihood method to estimate the model parameters.
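To make the estimation step concrete, the following is a minimal Python sketch (not the authors' code) of a maximum-likelihood GEV fit and the resulting return levels. The sample series is a hypothetical placeholder, and note that SciPy's shape parameter c corresponds to -ε in the parameterization above.

```python
# Hedged sketch: ML fit of a GEV to annual-maximum precipitation and
# estimation of T-year return levels. Data below are invented placeholders.
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maximum 1-h precipitation series (mm), one value per year.
annual_max = np.array([31.2, 45.6, 98.6, 52.3, 40.1, 63.8, 27.5, 55.0,
                       72.4, 38.9, 49.7, 60.2, 33.3, 81.5, 44.0])

# Maximum-likelihood fit; SciPy's shape c = -epsilon in the GEV CDF above.
c, loc, scale = genextreme.fit(annual_max)

# Return level for return period T: R_T = F^(-1)(1 - 1/T).
for T in (5, 10, 20, 50, 100):
    r_t = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"T = {T:3d} yr  ->  R_T = {r_t:6.1f} mm")
```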
After the parameters are determined, given a return period T, the annual extreme precipitation for T years is

R_T = F^(-1)(1 - p),  p = 1/T,

where R_T is the maximum hourly precipitation corresponding to the T-year return period and p is the exceedance probability corresponding to the return period. In order to better understand the influence of short-duration heavy precipitation on waterlogging in Wuhan, based on the statistical results above and the Gumbel extreme value type I distribution method, the maximum hourly precipitation, maximum 3 h, 6 h, 12 h and 24 h precipitation, maximum continuous hourly precipitation, maximum daily precipitation and maximum continuous daily precipitation are fitted, and the values for different return periods are estimated. Table 2 and Figures 4(a)-4(f) show the fitted maximum hourly precipitation, maximum 3 h, 6 h, 12 h and 24 h precipitation, and maximum continuous hourly precipitation for the different return periods.

Conclusion

Based on the measured data of maximum one-hour rainfall from six representative rainfall stations in Wuhan from 1992 to 2021, the variation law, characteristics and return periods of precipitation of different durations in Wuhan over the past 30 years are analyzed. The results show that:

1) The average annual precipitation in Wuhan is 1290.4 mm, and the maximum annual precipitation occurs in 2020. Although the annual number of precipitation days is increasing year by year, the annual number of days with rainstorms or heavier precipitation shows a decreasing trend.

2) The precipitation in Wuhan is mainly concentrated in spring (March-May) and summer (June-August), with the most in summer; the average precipitation is 345.2 mm in spring, 420.6 mm in summer, 224.1 mm in autumn and 151.9 mm in winter.

3) The monthly precipitation in Wuhan is mainly concentrated in July, with an average July precipitation of 235.6 mm; the lowest is 29.4 mm in December. The maximum monthly precipitation of the period appeared in July 1998, reaching 758.4 mm.

4) Based on the parameters of the GEV distribution, the maximum hourly precipitation, maximum 3 h, 6 h, 12 h and 24 h precipitation, maximum continuous hourly precipitation, maximum daily precipitation and maximum continuous daily precipitation are fitted, and the values for different return periods are estimated.
Missing data in trial-based cost-effectiveness analysis: An incomplete journey

SUMMARY

Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of, and the methods used to address, missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict the analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analysis, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing-at-random assumption.

INTRODUCTION

Cost-effectiveness analyses (CEA) conducted alongside randomised controlled trials are an important source of information for health commissioners and decision makers. However, clinical trials rarely succeed in collecting all the intended information (Bell, Fiero, Horton, & Hsu, 2014), and inappropriate handling of the resulting missing data can lead to misleading inferences (Little et al., 2012). This issue is particularly pronounced in CEA because these usually rely on collecting rich, longitudinal information from participants, such as their use of healthcare services (e.g., Client Service Receipt Inventory; Beecham & Knapp, 2001) and their health-related quality of life (e.g., EQ-5D-3L; Brooks, 1996). Several guidelines have been published in recent years on the issue of missing data in clinical trials (National Research Council, 2010; Committee for Medicinal Products for Human Use (CHMP), 2011; Burzykowski et al., 2010; Carpenter & Kenward, 2007) and for CEA in particular (Briggs, Clark, Wolstenholme, & Clarke, 2003; Burton, Billingham, & Bryan, 2007; Faria, Gomes, Epstein, & White, 2014; Manca & Palmer, 2005; Marshall, Billingham, & Bryan, 2009). Key recommendations include:

• taking practical steps to limit the number of missing observations;
• avoiding methods whose validity rests on contextually implausible assumptions, and using methods that incorporate all available information under reasonable assumptions; and
• assessing the sensitivity of the results to departures from these assumptions.

In particular, following Rubin's taxonomy of missing data mechanisms (Little & Rubin, 2002), methods valid under a missing-at-random (MAR) assumption (i.e., when, given the observed data, missingness does not depend on the unseen values) appear more plausible than methods relying on the more restrictive missing-completely-at-random assumption, where missingness is assumed to be entirely independent of the variables of interest.
Because we cannot exclude the possibility that the missingness may depend on unobserved values (missing not at random [MNAR]), an assessment of the robustness of the conclusions to alternative missing data assumptions should also be undertaken. Noble and colleagues (Noble, Hollingworth, & Tilling, 2012) have previously reviewed how missing resource use data were addressed in trial-based CEA. They found that practice fell markedly short of recommendations in several respects; in particular, reporting was usually poor and complete-case analysis was the most common approach. However, missing data research is a rapidly evolving area, and several of the key guidelines were published after that review. We therefore aimed to review how missing cost-effectiveness data were addressed in recent trial-based CEA. We reviewed studies published in the National Institute for Health Research Health Technology Assessment (HTA) journal, as it provides an ideal source for assessing whether recommendations have permeated CEA practice. These reports give substantially more information than a typical medical journal article, allowing authors the space to clearly describe the issues raised by missing data in their study and the methods they used to address these. Our primary objectives were to determine the extent of missing data, how these were addressed in the analysis, and whether sensitivity analyses to different missing data assumptions were performed. We also provide a critical review of our findings and recommendations to improve practice.

METHODS

The PubMed database was used to identify all trial-based CEA published in HTA between January 1, 2013, and December 31, 2015. We combined search terms such as "randomised," "trial," "cost," or "economic" to capture relevant articles (see Appendix A.1 for details of the search strategy). The full reports of these articles were downloaded and then screened for eligibility by excluding all studies that were pilot or feasibility studies; reported costs and effects separately (e.g., cost-consequence analysis); or did not report a within-trial CEA. For each included study, we extracted key information about the study and the analysis to answer our primary research questions. A detailed definition of each indicator extracted is provided in Appendix B. In a second stage, we drew on published guidelines and our experience to derive a list of recommendations to address missing data, and then re-reviewed the studies to assess the extent to which they followed these recommendations (see Appendix B for further details). Data analysis was conducted with Stata version 15 (StataCorp, 2017). The data from this review are available on request (Leurent, Gomes, & Carpenter, 2017).

Included studies

Sixty-five articles were identified in our search (Figure 1), and 52 eligible studies were included in the review (listed in Appendix A.2). The median time frame for the CEA was over 12 months, and the majority of trials (71%, n = 37) conducted follow-up with repeated assessments over time (median of 2; Table 1). The most common effectiveness measure was the quality-adjusted life year (81%, n = 42). Other outcomes included scores on clinical measures and dichotomous outcomes such as "smoking status".

Extent of missing data

Missing data were an issue in almost all studies, with only five studies (10%) having less than 5% of participants with missing data. The median proportion of complete cases was 63% (interquartile range, 47%-81%; Figure 2).
Missing data arose mostly from patient-reported questionnaires (e.g., resource use and quality of life). The extent of missing data was generally similar for cost and effectiveness data, but 10 (19%) studies had more missing data in the latter (Table 1). The proportion of complete cases reduced as the number of follow-up assessments increased (Spearman's rank correlation coefficient ρ = −0.59, p value < .001) and as the study duration increased (ρ = −0.29, p = .04).

Approach to missing data

In the remaining assessments, we excluded the five studies with over 95% complete cases. [For these five studies, four used CCA and one an ad hoc hybrid method for their primary analysis; one of the five conducted a sensitivity analysis to missing data.] Three main approaches to missing data were used: complete-case analysis (CCA; Faria et al., 2014), reported in 66% of studies (n = 31); multiple imputation (MI; Rubin, 1987; 49%, n = 23); and ad hoc hybrid methods (17%, n = 8). For the primary analysis, CCA was the most commonly used method (43%, n = 20), followed by MI (30%, n = 14; Table 2). MI was more common when the proportion of missing data was high and when there were multiple follow-up assessments (see Table 3).

Sensitivity analyses

Over half of the studies (53%, n = 25) did not conduct any sensitivity analysis around missing data, with 21% (n = 10) reporting CCA results alone and 11% (n = 5) MI results under MAR alone (Table 4). The remaining studies (n = 22, 47%) assessed the sensitivity of their primary analysis results to other approaches to the missing data. This usually meant performing either MI under MAR, or CCA, when the other approach was used in the primary analysis. Other sensitivity analyses included using last observation carried forward or regression imputation. Only two studies (4%) conducted sensitivity analyses assuming data could be MNAR. In both studies, values imputed under a standard MI were modified to incorporate possible departures from the MAR assumption for both the cost and effectiveness data, using a simplified pattern-mixture model approach (Faria et al., 2014; Leurent et al., 2018). The studies then discussed the plausibility of these departures from MAR and their implications for the cost-effectiveness inferences.

[Table notes: (c) Excluding 12 studies where this was unclear (n = 35). % = row percentages; CCA = complete-case analysis; MAR = assuming data missing at random; MI = multiple imputation; MNAR = assuming data missing not at random. Totals may exceed 100% as some studies conducted more than one sensitivity analysis. (a) Other methods used for sensitivity analysis include last observation carried forward (n = 1), regression imputation (n = 1), adjusting for baseline predictors of missingness (n = 1), imputing by the average of observed values for that patient (n = 1), and an ad hoc hybrid method using multiple and mean imputation (n = 1).]

Table 5 reports the number of studies that reported evidence of following the recommendations from Figure 3 (see Section 4). Most studies reported being aware of the risk of missing data, for example, by taking active steps to reduce them (n = 35, 74%). In addition, almost two-thirds of the studies (n = 29, 62%) reported the breakdown of missing data by arm, time point, and endpoint. Only about one-third of the studies clearly reported the reasons for the missing data (n = 16, 34%) and the approach used for handling the missing data and its underlying assumptions (n = 17, 36%). Only one study (2%) appropriately discussed the implications of missing data for its cost-effectiveness conclusions.
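As a concrete illustration of the analyses discussed above, the following is a minimal, hypothetical Python sketch (not taken from any reviewed study) of multiple imputation under MAR with Rubin's-rules pooling, followed by a simple delta adjustment of the imputed values to probe departures from MAR in the simplified pattern-mixture spirit noted above. The data, sample size, missingness rate, and delta value are all illustrative assumptions.

```python
# Hedged sketch: MI under MAR with Rubin's rules, plus a delta-adjusted
# MNAR sensitivity analysis. All numbers here are invented for illustration.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 200
cost = rng.gamma(shape=2.0, scale=500.0, size=n)   # hypothetical costs
qaly = 0.7 + 0.1 * rng.standard_normal(n)          # hypothetical QALYs
qaly[rng.random(n) < 0.3] = np.nan                 # ~30% missing outcomes

X = np.column_stack([cost, qaly])
M, delta = 20, -0.05    # M imputations; delta shifts imputed QALYs (MNAR probe)
means, variances = [], []
for m in range(M):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    Xm = imp.fit_transform(X)
    q = Xm[:, 1].copy()
    q[np.isnan(X[:, 1])] += delta       # delta = 0 recovers the MAR analysis
    means.append(q.mean())
    variances.append(q.var(ddof=1) / n) # within-imputation variance of the mean

# Rubin's rules: pooled estimate and total variance.
qbar = np.mean(means)
within = np.mean(variances)
between = np.var(means, ddof=1)
total_var = within + (1 + 1 / M) * between
print(f"pooled mean QALY = {qbar:.3f}, SE = {np.sqrt(total_var):.3f}")
```

Varying delta over a plausible range (e.g., -0.1 to 0.1 QALYs) and examining when the cost-effectiveness conclusion flips is one simple way to present such a sensitivity analysis.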
Summary of findings

Missing data remain ubiquitous in trial-based CEA. The median proportion of participants with complete cost-effectiveness data was only 63%. This reflects the typical challenges faced by CEA of randomised controlled trials, which often rely on patient questionnaires to collect key resource use and health outcome data. Despite best efforts to ensure completeness, a significant proportion of nonresponse is likely. This is consistent with other reviews, which also found no reduction in the extent of missing data in trials over time (Bell et al., 2014). CCA remains the most commonly used approach for handling missing data in trial-based CEA, in contrast to recommendations. This approach makes the restrictive assumption that, given the variables in the analysis model, the distributions of the outcome data are the same whether or not those outcome data are observed. The approach is also problematic because it can result in a loss of precision, as it discards participants who have partially complete postrandomisation data and who could provide important information to the analysis. Other unsatisfactory approaches based on unrealistic assumptions, such as last observation carried forward and single imputation, are also occasionally used. MI (Rubin, 1987) assuming MAR has been widely recommended for CEA (Briggs et al., 2003; Burton et al., 2007; Faria et al., 2014; Marshall et al., 2009), allowing baseline variables and postrandomisation data not in the primary analysis to be used for the imputation. It now seems to be more commonly used, with around half of the studies using MI for at least one of their analyses (up to 74% in 2015). Around one-third of the studies used MI for their primary CEA, which is higher than seen in primary clinical outcome analyses (8%; Bell et al., 2014). On the other hand, sensitivity analyses to missing data remain clearly insufficient. Only two studies (4%) conducted comprehensive sensitivity analyses and assessed whether the study's conclusions were sensitive to departures from the MAR assumption (i.e., possible MNAR mechanisms). Half of the studies did not conduct any sensitivity analysis regarding the missing data. The remaining studies performed some sort of sensitivity analysis, but these usually consisted of simple variations on the primary analysis, such as reporting CCA results in addition to MI. This may be more for completeness than a proper missing-data sensitivity analysis. For example, if MI is used for the primary analysis (having assumed that MAR is the realistic primary missing data assumption), a sensitivity analysis that involves CCA will make stronger missing data assumptions.

Strengths and limitations

Our review follows naturally from the review of Noble et al. (2012) and gives an update on the state of play after the publication of several key guidelines. Our review, however, differs in scope and methods and cannot be directly compared with the results of Noble et al. One of the key strengths of this review is that the comprehensive HTA reports allowed us to obtain a more complete picture of the missing data and the methods used to tackle them. HTA monographs are published alongside more succinct peer-reviewed papers in specialist medical journals, and they are often seen as the "gold standard" for trial-based CEA in the UK.
It therefore seems reasonable to assume that these reports are representative of typical practice in CEA. This review is, to our knowledge, the first to look at the completeness of both cost and effectiveness data. A limitation is the use of a single indicator, the "proportion of complete cases," to capture the extent of the missing data issue. This is, however, a clearly defined indicator and allows comparison with other reviews. The "recommendations indicators" also focused on the information reported in the study, not necessarily on what might have been done in practice.

Recommendations

A list of recommendations to address missing data in trial-based CEA is presented in Figure 3. Trial-based CEA are prone to missing data, and it is important that analysts take active steps at the design and data-collection stages to limit their extent (Bernhard et al., 2006; Brueton et al., 2013; National Research Council, 2010). Resource use questionnaires should be designed in a user-friendly way, and their completion encouraged during follow-up visits, possibly supported by a researcher (Mercieca-Bebber et al., 2016; National Research Council, 2010). Alternative sources should also be considered to minimise missing information, for example, administrative data or electronic health records (Franklin & Thorn, 2018; Noble et al., 2012). For any study with missing data, clear reporting of the issue is required. Ideally, the study should report details of the pattern of missing data (Faria et al., 2014), possibly as an appendix. At a minimum, CEA studies should report for each analysis the number of participants included by trial arm, as recommended in the Consolidated Standards of Reporting Trials guidelines (Noble et al., 2012; Schulz et al., 2010). Although CCA may be justifiable in some circumstances, the choice of CCA as the primary analysis approach is difficult to justify in the presence of repeated measurements, because the loss of power (from discarding all patients with any missing values) across the different time points tends to be large. Other approaches that are valid under more plausible MAR assumptions and make use of all the observed data, such as MI (Rubin, 1987), likelihood-based repeated measures models (Faria et al., 2014; Verbeke, Fieuws, Molenberghs, & Davidian, 2014), or Bayesian models (Ades et al., 2006), should be considered. In particular, MI has been increasingly used in CEA, and further guidance to support its appropriate use in this context is warranted. An area with clear room for improvement is the conduct of sensitivity analyses. This review found that many studies used CCA for the primary analysis and MI as a sensitivity analysis, or vice versa, and concluded that the results were robust to missing data. This is misleading because both of these methods rely on the assumption that the missingness is independent of the unobserved data. Although the MAR assumption provides a sensible starting point, it is not possible to determine the true missing-data mechanism from the observed data. Studies should therefore assess whether their conclusions are sensitive to possible departures from that assumption (National Research Council, 2010; Committee for Medicinal Products for Human Use (CHMP), 2011; Faria et al., 2014). Several approaches have been suggested for conducting analyses under MNAR assumptions. Selection models express how the probability of being missing is related to the value itself.
Pattern-mixture models, on the other hand, capture how the missing data could differ from the observed (Molenberghs et al., 2014; Ratitch, O'Kelly, & Tosiello, 2013). Pattern-mixture models appear attractive because they frame the departure from MAR in a way that can be more readily understood by clinical experts and decision makers, and they can be used with standard analysis methods such as MI (Carpenter & Kenward, 2012; Ratitch et al., 2013). MNAR modelling can be challenging, but accessible approaches have also been proposed (Faria et al., 2014; Leurent et al., 2018). Further developments are still needed to use these methods in the CEA context and to provide the analytical tools and practical guidance to implement them in practice.

CONCLUSION

Missing data can be an important source of bias and uncertainty, and it is imperative that this issue is appropriately recognised and addressed to help ensure that CEA studies provide sound evidence for healthcare decision making. Over the last decade, there have been some welcome improvements in handling missing data in trial-based CEA. In particular, more attention has been devoted to assessing the reasons for the missing data and adopting methods (e.g., MI) that can incorporate those in the analysis. However, there is substantial room for improvement. Firstly, more effort is needed to reduce missing data. Secondly, the extent and patterns of missing data should be more clearly reported. Thirdly, the primary analysis should consider methods that make contextually plausible assumptions rather than resorting automatically to CCA. Lastly, sensitivity analyses should be conducted to assess the robustness of the study's results to potential MNAR mechanisms.

CONFLICT OF INTEREST

The authors have no conflict of interest.

A.1 | PubMed search criteria and results

[Only fragments of Appendices A.1 (search strategy) and A.2 (included studies) survive here; one recoverable entry from the study list is: Wiles, N., Thomas, L., Abel, A., Barnes, M., Carroll, F., Ridgway, N., … Lewis, G. (2014). Clinical effectiveness and cost-effectiveness of cognitive behavioural therapy as an adjunct to pharmacotherapy for treatment-resistant depression in primary care: The CoBalT randomised controlled trial. Health Technology Assessment, 18(31).]

B.2 | Assessment of the recommendation indicators

Because these aspects could have been mentioned in multiple parts of the monograph, we used a systematic approach, looking for keywords and checking the most relevant paragraphs in the full report.

B.2.2 | Answers

"Yes": The recommendation was clearly mentioned, and the criterion therefore met. "No": The recommendation was not clearly mentioned or found; the recommendation may still have been followed but not reported (or at least not found with the above strategy). "Unclear": There were some suggestions that the criterion may have been met, but not enough information to be sure.

[Fragments of the assessment-criteria table survive; the recoverable rows are:]

• Report reasons for missing data. "Yes": comment on why data are missing (e.g., "because patients were too ill"), or explore baseline factors associated with missingness. "No": no mention of reasons for missing data in the CE section (the mention has to be specific to the cost-effectiveness missing data, or clearly state something like "reasons for missing data are discussed in the clinical analysis section …").

• D3. Describe methods used, and underlying missing data assumptions. "Yes": clearly state the method used to address missing data AND the underlying assumption. "No": no report of the missing data assumption or method used.

• Conduct sensitivity analyses, interpret the results appropriately, and draw an overall conclusion in light of the different results and the plausibility of the respective assumptions. "Yes": did an MNAR sensitivity analysis with an appropriate conclusion. "No": did not conduct sensitivity analyses; conducted sensitivity analyses but gave no comment/conclusion; or did MI and CCA and only said "results did not change/robust to missing data".
Bivariate Rainfall and Runoff Analysis Using Entropy and Copula Theories

Multivariate hydrologic frequency analysis has been widely studied using: (1) commonly known joint distributions or copula functions with the assumption that the univariate variables are independently identically distributed (I.I.D.) random variables; or (2) direct application of the entropy theory-based framework. However, regarding the I.I.D. univariate random variable assumption, the univariate variable may be considered independently distributed, but it may not be identically distributed; and secondly, the commonly applied Pearson's coefficient of correlation is not able to capture the nonlinear dependence structure that usually exists. Thus, this study attempts to combine the copula theory with the entropy theory for bivariate rainfall and runoff analysis. The entropy theory is applied to derive the univariate rainfall and runoff distributions. It permits the incorporation of given or known information, codified in the form of constraints, and results in a universal solution for the univariate probability distributions. The copula theory is applied to determine the joint rainfall-runoff distribution. Application of the copula theory results in: (i) the detection of the nonlinear dependence between the correlated random variables, rainfall and runoff, and (ii) capturing the tail dependence for risk analysis through the joint return period and conditional return period of rainfall and runoff. The methodology is validated using annual daily maximum rainfall and the corresponding daily runoff (discharge) data collected from watersheds near Riesel, Texas (small agricultural experimental watersheds) and the Cuyahoga River watershed, Ohio.

In the above three types of applications, use of the copula theory separates approach II from approaches I and III through its capability of capturing the nonlinear dependence structure of the studied variables, whereas the application of Pearson's linear covariance in approaches I and III is not sensitive to the nonlinear dependence structure. The advantage of approach III is that, by applying the maximum entropy theory, one may reach the universal solution and better capture the shape of the probability density function (PDF) [30][31][32][33][34][35][36]. Considering approaches I and II, there exists one common assumption, i.e., the univariate hydrological variables are considered as independently identically distributed (I.I.D.) random variables. Although, depending on how the data are collected, it may be valid to assume them independently distributed, the assumption that the variable is identically distributed may not be valid for univariate data with a mixed structure. Misidentification of the univariate probability distribution may result in underestimation or overestimation of the joint and conditional return periods in risk analysis. In addition, even if the I.I.D. random variable assumption is valid, the univariate distribution determined is usually not universal for the same dataset. Thus, it is important to re-evaluate the determination of univariate distributions.
With the limitations of each approach discussed above, this study attempts to utilize the advantages held by approaches II and III and aims to provide a framework linking the maximum entropy and copula theories for multivariate hydrological frequency analysis, so as to avoid misusing the assumptions. Compared with existing frameworks, the proposed framework has the following advantages: (i) a universal probability distribution can be obtained from appropriately defined constraints; (ii) multiple modes can be captured using the maximum entropy theory if the data show a multi-mode structure, which may result in better estimation of multivariate/conditional return periods of given events; and (iii) the nonlinear dependence among the correlated random variables can be captured by applying the copula theory, rather than applying a known or entropy-based multivariate probability distribution with the dependence captured by linear covariance. For illustration, the paper applies rainfall and runoff (discharge) data from: (1) watersheds near Riesel, Texas (the agricultural experimental watersheds maintained by the U.S. Department of Agriculture, Agricultural Research Service), and (2) the Cuyahoga River watershed in Ohio, collected by USGS and NOAA. The paper is organized as follows: after introducing the subject in this section, univariate rainfall and runoff frequency distributions are derived using the entropy theory in Section 2. Section 3 discusses the joint probability distribution estimation using copula theory, tail dependence for extreme events, and the corresponding joint and conditional return period analysis. Section 4 discusses the goodness-of-fit statistics, and an application of the methodology is presented in Section 5. The paper is concluded in Section 6.

Determination of Maximum Entropy-Based Univariate Distributions

Derivation of univariate distributions of rainfall and runoff using the entropy theory entails: (1) defining entropy and specifying the known information about the random variables in terms of constraints, and (2) maximizing entropy to obtain the probability density function using the method of Lagrange multipliers and determining these multipliers.

Entropy and Specification of Constraints

For a univariate random variable X with a continuous probability density function f_X(x), the Shannon entropy [37], H(X), can be expressed as:

H(X) = -∫ f_X(x) ln f_X(x) dx    (1)

In accordance with the principle of maximum entropy (POME) [38,39], one can obtain the most probable probability density function (PDF) for the random variable X given the available information (i.e., constraints) by maximizing Equation (1). In this study, the sample statistical moments are used as constraints, with two main advantages. First, it avoids assuming certain types of distributions for the data based on a nonparametric approach (frequency histogram or kernel density function), and hence one may reach a universal PDF for the dataset analyzed. Second, the PDF so derived may capture the possible multiple modes embedded in the data.
It is well known that the annual maximum daily rainfall amount and the corresponding daily discharge are skewed to the right. Thus, at least the first three non-central sample statistical moments need to be considered as constraints. According to probability theory, it is also known that if the excess kurtosis is significantly different from 0, the probability density function of the random variable is heavy tailed, which makes it necessary to include the fourth non-central statistical moment as a constraint. This necessity is determined based on the excess kurtosis:

γ₂ = E[(X − μ)⁴]/σ⁴ − 3    (2)

where γ₂ stands for the excess kurtosis and G₂ stands for its sample counterpart, the sample excess kurtosis. Whether G₂ is significantly different from zero can then be determined by the statistic T:

T = G₂ / SEK    (3)

where SEK stands for the standard error of kurtosis (approximately √(24/n) for large samples). In Equations (2) and (3), n is the sample size. For the statistic T: if |T| > 2, the excess kurtosis is significantly different from zero and the fourth non-central moment needs to be applied as a constraint; otherwise, the fourth non-central moment does not need to be applied. In addition, considering the rainfall and runoff data structure, the first moment in the logarithm domain may also contribute to the PDF. Hence, the constraints for the maximum entropy-based distribution are:

∫ f(x) dx = 1    (4)

∫ x^j f(x) dx = E[x^j] = a_j,  j = 1, …, N    (5)

∫ ln(x) f(x) dx = E[ln x]    (6)

with N = 3 if the excess kurtosis is not significantly different from zero, and N = 4 otherwise.

With the constraints defined in Equations (4-6), the entropy function [Equation (1)] is maximized using the method of Lagrange multipliers, with the resulting maximum entropy-based PDF expressed as:

f(x) = exp( −λ₀ − Σ_{i=1}^{N+1} λᵢ gᵢ(x) )    (7)

where the λ's are the Lagrange multipliers and gᵢ(x) denote the constraint functions (x, x², x³, possibly x⁴, and ln x). The PDF defined by Equation (7) is able to preserve the most important statistical moments that dominate its shape. Following [40,41], the Lagrange multipliers can be estimated. In what follows, the estimation concept and procedure are described in detail.

Substituting Equation (7) into Equation (4), one can obtain the partition function as:

exp(λ₀) = ∫ exp( −Σ_{i=1}^{N+1} λᵢ gᵢ(x) ) dx    (8)

or:

λ₀ = ln ∫ exp( −Σ_{i=1}^{N+1} λᵢ gᵢ(x) ) dx

It is proved that λ₀ is a strictly convex function of λ₁, λ₂, λ₃, …, λ_{N+1} [41]. Thus, one can write the objective function as:

Z(λ₁, …, λ_{N+1}) = λ₀ + Σ_{i=1}^{N+1} λᵢ aᵢ = ln ∫ exp( −Σᵢ λᵢ gᵢ(x) ) dx + Σᵢ λᵢ aᵢ    (9)

where aᵢ stands for the sample statistical moment of the i-th constraint.

It should be noted that the objective function Z so defined is a convex function of the λ's, and minimizing the objective function Z results in the maximum entropy. Now the Lagrange parameters can be determined using Newton's method as follows. The objective function [Equation (9)] can be approximated with a second-order Taylor series around the parameter vector λ = [λ₁, λ₂, …, λ_{N+1}]:

Z(λ + Δ) ≈ Z(λ) + Gᵀ Δ + (1/2) Δᵀ H Δ    (10)

where the elements Gᵢ of the gradient vector G and the elements Hᵢⱼ of the Hessian matrix H can be written as:

Gᵢ = ∂Z/∂λᵢ = aᵢ − E[gᵢ(X)],  Hᵢⱼ = ∂²Z/∂λᵢ∂λⱼ = E[gᵢ(X)gⱼ(X)] − E[gᵢ(X)]E[gⱼ(X)]    (11)

with the expectations taken with respect to the current density estimate. The Lagrange parameters can then be estimated using Newton's method with the initial parameter set [0, 0, 0, 0] (or [0, 0, 0, 0, 0] when the fourth moment is included) and the convergence condition G = 0. It is necessary to state that the multiplier attached to the highest-order moment constraint needs to be greater than 0 for the density to remain integrable [42].
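The Newton iteration just described can be sketched in a few lines of Python. The following is a minimal illustration (not the authors' code) with constraint functions x, x², x³ and ln x, integrals approximated on a discrete grid, and an input rescaling added purely for numerical conditioning; these are implementation choices, not prescriptions from the paper, and in practice a damped step or line search may be needed for robust convergence.

```python
# Hedged sketch: Newton's method for the Lagrange multipliers of a
# maximum-entropy density f(t) ~ exp(-sum_i lam_i * g_i(t)).
import numpy as np

def fit_maxent(x, n_grid=4000, n_iter=200, tol=1e-8):
    s = x.max()                      # rescaling: a numerical-conditioning choice
    xs = x / s
    grid = np.linspace(1e-4, 2.0, n_grid)
    dt = grid[1] - grid[0]
    g = lambda t: np.column_stack([t, t**2, t**3, np.log(t)])
    a = g(xs).mean(axis=0)           # sample moments: the constraints a_i
    Gg = g(grid)
    lam = np.zeros(4)
    for _ in range(n_iter):
        w = np.exp(-Gg @ lam)
        w /= w.sum() * dt                              # normalised density on grid
        Eg = Gg.T @ (w * dt)                           # E[g_i] under current f
        grad = a - Eg                                  # gradient of dual objective Z
        H = (Gg.T * (w * dt)) @ Gg - np.outer(Eg, Eg)  # Hessian = Cov(g_i, g_j)
        lam -= np.linalg.solve(H, grad)                # (undamped) Newton step
        if np.abs(grad).max() < tol:
            break
    return lam, grid * s, w / s      # density mapped back to the original scale

# Hypothetical right-skewed "annual maximum rainfall" sample (mm).
rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=25.0, size=60)
lam, xg, f = fit_maxent(sample)
print("Lagrange multipliers:", np.round(lam, 4))
print("density integrates to", np.round(np.trapz(f, xg), 4))
```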
Bivariate Rainfall and Runoff Distribution Using Copula Theory

Using the copula theory, one may successfully capture the nonlinear dependence between the rainfall and runoff (discharge) variables. The copula concept was first introduced by Sklar [43]. For a bivariate case, let observations (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ) be drawn from the bivariate population of (X, Y) with marginal distributions F_X(x) and F_Y(y). Then the joint distribution, i.e., H(X, Y) or simply H, can be expressed using the copula as:

H(x, y) = C(F_X(x), F_Y(y))    (12)

where C is the copula. C is a unique mapping when F_X and F_Y are continuous, and it captures the dependence between the random variables X and Y. In what follows, the topics essential to applying the copula theory to rainfall and runoff analysis are discussed, i.e., the dependence measure, the choice of copulas, parameter estimation, tail dependence, and joint/conditional return period determination.

Dependence Measure for Bivariate Random Variables and Choice of Copulas

To apply the copula theory to investigate the bivariate random variables X and Y, the dependence structure can be examined using rank-based coefficients of correlation, e.g., Kendall's τ, Spearman's ρ, and Gini's γ [44]. The rank-based coefficients of correlation are distribution free and sensitive to nonlinear dependence structures, which makes them more robust than the commonly applied Pearson's coefficient of correlation (sensitive only to linear dependence). In this study, the rank-based coefficients of correlation (i.e., Kendall's τ and Spearman's ρ) were applied to detect the dependence structure of the rainfall and runoff variables.

It is known that the dependence between rainfall and runoff is usually positive by nature. Thus, copula models dealing with positive dependence were selected as candidates for modeling the joint rainfall and runoff distribution. Appendix I lists the copula functions examined, including one- and two-parameter Archimedean copulas, extreme-value copulas, and the Plackett copula.

Estimation of Copula Parameters

The parameters of a copula model can be estimated nonparametrically through rank-based coefficients of correlation, i.e., Kendall's τ, Spearman's ρ, and Gini's γ. The parameters can also be estimated using maximum likelihood estimation (MLE). In this study, MLE was applied for parameter estimation.

Let the empirical probability distributions of the rainfall (X) and runoff (discharge) (Y) random variables be Fₙ(x) and Gₙ(y). Then, for a given copula model candidate C_θ, the maximum log-likelihood function may be written as:

log L(θ) = Σ_{i=1}^{n} ln c_θ(Fₙ(xᵢ), Gₙ(yᵢ))    (13)

where θ represents the copula parameter vector, n is the sample size, and c(u, v) represents the copula density function:

c(u, v) = ∂²C(u, v) / (∂u ∂v)    (13a)

The copula parameter is then optimized by maximizing the log-likelihood function or, equivalently, minimizing the negative log-likelihood function.
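A minimal Python sketch of this estimation step is given below for the Clayton copula, one of the positively dependent Archimedean candidates. The paired sample is synthetic, and rank-based pseudo-observations stand in for the fitted marginal distributions; both are assumptions made purely for illustration.

```python
# Hedged sketch: MLE for a Clayton copula from rank-based pseudo-observations.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_neg_loglik(theta, u, v):
    # Clayton density: c(u,v) = (1+theta) * (u*v)^(-(1+theta))
    #                  * (u^(-theta) + v^(-theta) - 1)^(-(2*theta+1)/theta)
    if theta <= 0:
        return np.inf
    s = u**(-theta) + v**(-theta) - 1.0
    logc = (np.log1p(theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 * theta + 1) / theta * np.log(s))
    return -logc.sum()

rng = np.random.default_rng(2)
# Hypothetical paired rainfall/runoff sample with positive dependence.
x = rng.gamma(3.0, 25.0, 80)
y = 0.6 * x + rng.gamma(2.0, 5.0, 80)

# Pseudo-observations: rank transform in place of fitted marginal CDFs.
n = len(x)
u = rankdata(x) / (n + 1)
v = rankdata(y) / (n + 1)

res = minimize_scalar(clayton_neg_loglik, bounds=(1e-4, 30), args=(u, v),
                      method="bounded")
print(f"Clayton theta_hat = {res.x:.3f}")
```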
Nonparametric estimation was based on the empirical copula with no assumption imposed on either copula or marginal distributions [47].Let (R x , R y ) be the paired rank of the bivariate random sample , , 1, … , , the empirical copula C m is written as: then, the nonparametric upper-tail dependence coefficient may be estimated in three different forms as: where is the sample size; is the chosen threshold for Equations (14a,b); and in Equation (14b) denotes the relationship to the scant of the copula's diagonal. Equation (14a) was first proposed in [48], whereas Equation (14b) first appeared in [49] and it is sensitive when the extreme values are not along the diagonal as SEC stands for.The threshold in Equations (14a,b) can be estimated following the heuristic plateau-finding algorithm discussed in [47].Equation (14c) was first proposed in [50] and may be appropriately applied only under the assumption that the empirical copula function approximates an extreme value (EV) copula. Return Period of Bivariate Variables Using the Copula Theory In rainfall and runoff analysis, the purpose of deriving the joint distribution and study of the tail dependence is to estimate the joint/conditional return period of extreme events.With the upper tail dependence appropriately assessed, the joint and conditional return period of extreme events may be studied. Joint Return Period "AND" Case Using Copula Theory Following [51], the joint return period can be determined with the appropriately selected copula function as follows.Considering the 2-dimensional continuous bivariate random variables , , , , the "AND" case may be determined using Kendall distribution, component-wise and most-likely excess design realizations [51].In this study, the most-likely design realization approach was adopted.For rainfall and runoff variables X and Y, the joint return period is written as: where stands for the critical layer and t stands for the joint return period: , is the joint probability density function derived from copula function as: where stands for the copula density function as Equation (13a); and and stand for the fitted univariate PDF. Then, the design event (x, y) can be estimated by finding the maximum of the joint density function in the logarithm domain over the critical layer with the corresponding (x * , y * ) as the design event with T-year return period.The critical layer can be obtained using the Kendall distribution. 
Conditional Return Period of Runoff Events Given Rainfall Events

Again, using X as the rainfall random variable and Y as the runoff random variable, the conditional return period of runoff events given rainfall events can be written in two cases:

Case I: Return period of runoff events conditioned on rainfall events greater than the given rainfall values. Applying the copula theory, the exceedance conditional distribution is written as:

P(Y > y | X > x) = [1 − F_X(x) − F_Y(y) + C(F_X(x), F_Y(y))] / [1 − F_X(x)]    (16)

The corresponding conditional return period is written as:

T_{Y|X>x} = 1 / P(Y > y | X > x)    (16a)

Case II: Return period of runoff events conditioned on rainfall events equal to the given rainfall values. Similarly, the exceedance conditional probability is written as:

P(Y > y | X = x) = 1 − ∂C(u, v)/∂u evaluated at u = F_X(x), v = F_Y(y)    (17)

Equation (17) can also be rewritten as:

P(Y > y | X = x) = 1 − C(v | u)    (17a)

The corresponding conditional return period is written as:

T_{Y|X=x} = 1 / P(Y > y | X = x)    (17b)

In Equations (16, 17), x* represents the rainfall events; T represents the conditional return period of runoff events; and y* represents the runoff events that need to be estimated based on T and x*. In addition, Equation (16) is right tail increasing (RTI) if it is a nondecreasing function of x for all y, and Equation (17) or (17a) is stochastically increasing (SI) if it is a nondecreasing function of x for all y. It should also be noted that the 1 in Equations (16a) and (17b) stands for the annual event. If one considers partial duration time series (i.e., the events over a given threshold), the 1 should be replaced with λ (the expected number of events per year).

Goodness-of-Fit Statistics

Before applying the copula-entropy framework to study the bivariate rainfall and runoff frequency and risk analysis, goodness-of-fit statistical tests need to be performed for both the fitted univariate distributions and the copula functions.

Goodness-of-Fit Statistics for Univariate Distributions

With the parametric univariate probability distribution fitted to the random variable X, goodness-of-fit statistical tests need to be performed to assess whether the fitted probability distribution is valid. In the study, three goodness-of-fit statistics were considered.

The goodness-of-fit statistic using the root mean square error (RMSE) may be expressed as:

RMSE = √[ (1/n) Σ_{i=1}^{n} ( F̂(x_i) − F_n(x_i) )² ]    (18)

where RMSE is the root mean square error; F̂(x_i) is the estimated value from the fitted univariate probability distribution; F_n(x_i) is the corresponding observed (empirical) value; and n is the sample size.

The Kolmogorov-Smirnov (K-S) goodness-of-fit test is a nonparametric, distribution-free test. For continuous random variables, it quantifies the distance between the empirical distribution F_n and the specified distribution function F. The null hypothesis (H_0) is: X follows the specified distribution function F. The alternative hypothesis (H_a) is: X does not follow the specified distribution function. The K-S goodness-of-fit statistic is defined as:

D_n = max_{1≤i≤n} [ max( i/n − F(x_(i)), F(x_(i)) − (i−1)/n ) ]    (19)

where x_(i) denotes the sample data sorted in increasing order.

The Anderson-Darling (A-D) goodness-of-fit test examines whether the sample data are drawn from a specific probability distribution. Compared with the K-S goodness-of-fit test, the A-D goodness-of-fit test is not distribution free and gives more weight to the tails than the K-S test [53]. The null hypothesis (H_0) is: X follows the specified distribution. The alternative (H_a) is: X does not follow the specified distribution. The A-D goodness-of-fit statistic can be expressed as follows:

A² = −n − (1/n) Σ_{i=1}^{n} (2i − 1) [ ln F(x_(i); θ) + ln(1 − F(x_(n+1−i); θ)) ]    (20)

where n is the sample size; θ is the parameter vector of the fitted probability distribution; and x_(i) denotes the sample data sorted in increasing order. In Equation (20), the null hypothesis (H_0) is rejected if the computed A² statistic exceeds the critical value at the chosen significance level.
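Returning to the Case I conditional return period above: under the assumption that an explicit copula C(u, v) is available, Equations (16) and (16a) reduce to a few lines of code. The Gumbel-Hougaard copula and the parameter value are used here purely as illustrative stand-ins for the fitted family.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    # Gumbel-Hougaard copula, theta >= 1.
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

def cond_exceed_case1(u, v, theta):
    # Equation (16): P(Y > y | X > x), with u = F_X(x) and v = F_Y(y).
    return (1.0 - u - v + gumbel_copula(u, v, theta)) / (1.0 - u)

theta = 2.0
u = 0.99            # e.g. the 100-year rainfall quantile, F_X(x*) = 1 - 1/100
for v in (0.90, 0.95, 0.99):
    p = cond_exceed_case1(u, v, theta)
    print(f"v = {v:.2f}: P(Y>y|X>x) = {p:.4f}, T = {1.0/p:.1f} years")
```

Inverting the relationship, the design runoff y* for a target conditional return period T is found by solving cond_exceed_case1(u, F_Y(y*)) = 1/T for y*.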
The P-value is approximated using parametric bootstrap simulation for the maximum entropy-based univariate distribution.

Goodness-of-Fit Statistics for Copula

Formal goodness-of-fit statistics for multivariate distributions have been extensively discussed based on the copula theory [54,55]. Following their discussion, the goodness-of-fit test based on the probability integral transformation (i.e., Kendall's univariate probability transformation) was employed in the study.

For a given bivariate probability distribution function expressed through a copula function [Equation (12)], the corresponding Kendall's nonparametric univariate probability transformation can be written as:

K_n(t) = (1/n) Σ_{i=1}^{n} 1(V_i ≤ t), t ∈ [0, 1]    (21)

where n is the sample size and:

V_i = (1/n) Σ_{j=1}^{n} 1(x_j ≤ x_i, y_j ≤ y_i)    (21a)

The null hypothesis is H_0: the bivariate random variable can be modeled by a given copula function, assessed through the measure of the distance between K_n and the parametric estimate K_{θ_n} using:

κ_n(t) = √n [ K_n(t) − K_{θ_n}(t) ]    (22)

Now the test statistic of the rank-based Cramér-von Mises statistic S_n^{(K)} can be written as:

S_n^{(K)} = ∫_0^1 [κ_n(t)]² dK_{θ_n}(t)    (23)

The corresponding P-value of the statistic is then determined using the parametric bootstrap procedure proposed in [14], outlined as follows:

(1) Estimate the parameter vector θ_n for the copula function using MLE with pseudo-observations.
(2) Calculate K_n(·) from Equation (21).
(3) Determine K_{θ_n}(·) and S_n^{(K)}. The Archimedean copula family has an analytical formulation of K_θ(·), and thus the statistic defined in Equation (22) may be calculated directly. Otherwise, Monte Carlo simulation can be applied to approximate K_{θ_n}(·) with the following steps:
- Generate a random sample {(u_k*, v_k*)} from the fitted copula function, with a sample size m at least as long as the observed data.
- Calculate the approximated K_{θ_n}(·) using an approach similar to Equation (21) as:

K_{θ_n}(t) ≈ (1/m) Σ_{k=1}^{m} 1(V_k* ≤ t)    (24)

- Calculate the approximated statistic as:

S_n^{(K)} ≈ Σ_j [κ_n(t_j)]² [ K_{θ_n}(t_{j+1}) − K_{θ_n}(t_j) ]    (25)

(4) Use a parametric bootstrap procedure with a large number N to determine the associated P-value as follows:
- Generate N bivariate random samples from the fitted copula function of the observed data.
- Estimate the parameters of the fitted copula function using the generated bivariate random samples.
- Calculate K_{n,k}(·), k = 1, ..., N, for each bivariate sample using Equation (21).
- Repeat step (3) to determine K_{θ_{n,k}}(·) and S_{n,k}^{(K)} for each sample.
- Approximate the associated P-value for the Cramér-von Mises statistic as:

P ≈ (1/N) Σ_{k=1}^{N} 1( S_{n,k}^{(K)} > S_n^{(K)} )    (26)

Data

In this study, four watersheds were selected for analysis (two agricultural experimental watersheds in Riesel, Texas, and two watersheds from the Cuyahoga River Watershed, Ohio). The two experimental watersheds are located near Riesel (Waco), Texas, and are maintained by the Agricultural Research Service (ARS) of the U.S. Department of Agriculture (USDA). In what follows, the procedure for selecting rainfall-runoff events from these watersheds is outlined:

(1) Agricultural experimental watersheds near Riesel (Waco), Texas: The experimental watersheds near Riesel (Waco) are the W1 and Y2 watersheds [Figure 1(a)], and these were selected based on the watershed area and the length of records maintained. There are multiple raingages in both watersheds, so the Thiessen polygon method was applied to determine the daily areal rainfall depth. The Thiessen polygon weights and the daily rainfall and corresponding runoff were obtained from the USDA-ARS data warehouse. Furthermore, annual maximum daily rainfall amounts and the resulting daily discharges were applied for rainfall and runoff analysis.
(2) Cuyahoga River Watershed, Ohio: The discharge gages at Old Portage (USGS 04206000) and Independence (USGS 04208000) were selected for analysis. The digital terrain model (DTM) flow lines were obtained from USGS. The watersheds contributing to Old Portage and Independence were delineated in a Geographical Information System (GIS), as shown in Figure 1(b). The raingages within the watersheds were identified from the raingage information maintained by the National Oceanic and Atmospheric Administration (NOAA). Again, the Thiessen polygon method was applied to determine the daily areal rainfall. The annual maximum daily rainfall amount and the resulting daily discharge were applied for rainfall and runoff analysis. Table 1 lists the pertinent information on the selected watersheds (i.e., drainage area, raingages and length of record for each watershed). Table 2 lists the Thiessen polygon weights for Old Portage and Independence determined in GIS. This information was further applied to determine the areal rainfall amounts at Old Portage and Independence.

Entropy-Based Univariate Rainfall and Runoff Distributions

As discussed in Section 2, the first moment in the logarithm domain and at least the first three non-central moments (Table 3) are needed as constraints to derive the maximum entropy-based univariate distribution for the rainfall and runoff random variables, with the necessity of the fourth non-central moment determined from the study of excess kurtosis [Equations (2,3)]. The study of excess kurtosis for the rainfall and runoff variables indicates that the fourth non-central moment needs to be considered, except for the daily rainfall of the Old Portage watershed and the daily runoff (discharge) of the Independence watershed. With the number of non-central moments identified, the Lagrange multipliers of the PDF defined in Equation (7) were estimated by finding the minimum of the objective function defined in Equation (9), with the constraints and Hessian matrix given by Equations (11a,b). Table 4 lists the parameters estimated for each watershed. Table 5 lists the relative differences between the sample moments and those calculated from the entropy-based distributions. Table 5 indicates that the sample moments were well preserved. Further, the goodness-of-fit statistics, i.e., the RMSE [Equation (18)], the K-S goodness-of-fit test [Equation (19)], and the A-D goodness-of-fit test [Equation (20)], were applied to examine whether the maximum entropy-based probability distribution may appropriately represent the underlying univariate rainfall and runoff probability distributions. The P-value was approximated using Miller's approximation for the K-S goodness-of-fit test, and Monte Carlo simulation with a parametric bootstrap resampling procedure (10,000 parametric bootstrap samples) for the A-D goodness-of-fit test. The test results in Table 6 indicate that the P-values calculated from both the K-S and A-D goodness-of-fit tests were much higher than the critical level α = 0.05. So the null hypothesis cannot be rejected; that is, the maximum entropy-based probability distribution can appropriately represent the univariate rainfall/runoff probability distributions. The RMSE results in Table 6 show that the corresponding error is also small. In addition, to compare graphically, the maximum entropy-based PDF is compared with the frequency histograms (Figures 2 and 3), which indicates that the proposed maximum entropy-based probability density function is able to capture the shape of the frequency histogram. Thus, from both the formal goodness-of-fit statistics and graphical
comparison for the univariate rainfall and runoff random variables, the univariate entropy-based distribution derived represents the PDF of the rainfall and runoff variables well. It is worth stating that the appropriate identification of the univariate rainfall and runoff distributions plays an important role in the study of the joint and conditional return periods in the case of extreme behavior of the rainfall and runoff variables.

Bivariate Rainfall and Runoff Distribution

Considering rainfall and runoff as continuous random variables, the copula theory was applied to capture the dependence with a unique copula function C [Equation (12)]. Table 7 lists the sample Kendall's τ and Spearman's ρ rank coefficients of correlation. The results showed that overall there existed a positive dependence structure for all the watersheds studied. It is therefore appropriate to apply the copula functions listed in Appendix I. The parameters of the copula functions were estimated using the pseudo-maximum likelihood method, in which the empirical marginal distribution was applied. Table 8 lists the parameters estimated and the corresponding maximum log-likelihood (LL). Table 8 indicates that the Galambos copula, belonging to the extreme-value copula family, reached the largest maximum LL for watersheds W1, Y2 and Old Portage. However, the Frank copula reached the largest maximum LL for the Independence watershed. Note: [a] when θ_1 → 0, the copula converges to the Gumbel-Hougaard copula; [b] when θ_1 = 1, the BB5 copula is the Galambos copula; [c] when θ_1 = 1, the BB7 copula is the Clayton copula.

In order to better assess the copula functions estimated using the pseudo-maximum likelihood method, a formal goodness-of-fit analysis was performed to test whether a given copula function may appropriately model the joint distribution, using the goodness-of-fit test based on the integral probability transformation discussed in Section 4.
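The Kendall-transform test just referenced [Equations (21)-(23)] and its bootstrap P-value [Equation (26)] can be sketched numerically. The code below assumes a Clayton copula purely for convenience, since its Kendall distribution, K_θ(t) = t + t(1 − t^θ)/θ, and a conditional-inversion sampler are available in closed form; the copula parameter is re-used rather than re-estimated in each bootstrap replicate, a simplification of the full procedure. This is an illustration, not the study's implementation.

```python
import numpy as np

def kendall_V(x, y):
    # V_i of Equation (21a): fraction of sample points dominated by (x_i, y_i).
    x, y = np.asarray(x), np.asarray(y)
    return np.array([np.mean((x <= xi) & (y <= yi)) for xi, yi in zip(x, y)])

def K_empirical(V, t):
    # Equation (21): K_n(t) evaluated on a grid t.
    return np.mean(V[None, :] <= t[:, None], axis=1)

def K_clayton(t, theta):
    # Closed-form Kendall distribution of the Clayton copula.
    return t + t * (1.0 - t**theta) / theta

def cvm_stat(u, v, theta, grid=np.linspace(1e-3, 1 - 1e-3, 500)):
    # Discretised Equation (23), i.e. the approximation of Equation (25).
    Kn = K_empirical(kendall_V(u, v), grid)
    Kt = K_clayton(grid, theta)
    kappa2 = len(u) * (Kn - Kt)**2          # kappa_n(t)^2, Equation (22)
    return np.sum(0.5 * (kappa2[1:] + kappa2[:-1]) * np.diff(Kt))

def sample_clayton(n, theta, rng):
    # Conditional-inversion sampler for the Clayton copula.
    u, w = rng.uniform(size=n), rng.uniform(size=n)
    v = ((w**(-theta / (1.0 + theta)) - 1.0) * u**(-theta) + 1.0)**(-1.0 / theta)
    return u, v

rng = np.random.default_rng(3)
theta_hat, n, N = 1.5, 80, 200
u, v = sample_clayton(n, theta_hat, rng)    # stand-in for the observed data
S_obs = cvm_stat(u, v, theta_hat)
S_boot = [cvm_stat(*sample_clayton(n, theta_hat, rng), theta_hat) for _ in range(N)]
print("S_n =", round(S_obs, 4), " P-value =", np.mean(np.array(S_boot) > S_obs))
```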
The Cramér-von Mises test statistic was calculated using Equations (21)-(23). The corresponding P-value was approximated using Equations (24)-(26) with 10,000 parametric bootstrap samples. Table 9 lists the test statistics and the corresponding P-values for all the copula functions studied. It indicates: (i) the copula functions reaching the maximum LL can appropriately measure the full dependence of the rainfall and runoff variables; (ii) for the Independence watershed, the Plackett copula reached a much higher P-value than did the Frank copula, and there exist minimal differences in the maximum LL calculated from the Frank and Plackett copulas (4.5%). Thus, the Galambos copula can be applied to represent the joint distribution for the W1, Y2 and Old Portage watersheds, and the Plackett copula can be applied to represent the joint distribution for the Independence watershed. Figures 4 and 5 compare the empirical PDF (CDF) and the parametric PDF (CDF) determined from the fitted copula function for the experimental watersheds, i.e., W1 and Y2, and the Cuyahoga River watersheds, i.e., Old Portage and Independence. The figures indicate that: (i) there clearly exists an upper tail dependence for the experimental watersheds W1 and Y2 (joint PDF in Figure 4); (ii) the upper tail dependence for Old Portage is not as significant as that of the experimental watersheds; and (iii) there is no clear evidence of upper tail dependence for Independence, which is an interesting finding from the study of the annual maximum daily rainfall amount and corresponding daily discharge. The findings for the watersheds at Old Portage and Independence may be explained by the natural flow of the stream being affected by flow diversion, storage reservoirs, and power plants located in the watersheds (USGS). To further assess the above findings numerically, the upper tail dependence coefficient was calculated from both the empirical copula and the copula function candidates (Appendix II). Equations (14a-c) were applied to determine the upper tail dependence coefficient nonparametrically from the empirical copula, where the thresholds k in Equations (14a,b) were determined by applying the plateau-finding algorithm [47]. The equations listed in Appendix II were applied to determine the upper tail dependence coefficients for the copula functions. Table 10 lists the results for the upper tail dependence coefficient. It shows that the differences are relatively small for the nonparametric estimation (the maximum relative difference being around 10%, comparing Equations (14a,b) with Equation (14c)) for the W1, Y2 and Old Portage watersheds. For the Independence watershed, the upper tail dependence coefficient was estimated to be close to 0 from Equations (14a,b); however, it reached around 0.43 if Equation (14c) was applied. Again comparing with the graphical finding (Figure 5), Equation (14c) cannot be applied to estimate the upper tail dependence coefficient for the Independence watershed, due to the strong underlying assumption of the empirical copula approximating an extreme value copula.

To this end, the conclusion is that the extreme value copula can be applied to assess the upper tail dependence for the W1, Y2 and Old Portage watersheds using the Galambos copula. No upper tail dependence was found for the Independence watershed, and the Plackett copula can be reasonably applied. Thus, in what follows, the Galambos and Plackett copulas were applied to study the joint (and conditional) return periods. Note: [a] with threshold; [b] no threshold needed.
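For the parametric side of Table 10, the upper-tail dependence coefficients of common extreme-value families have closed forms in the copula literature, for example λ_U = 2^(−1/θ) for the Galambos copula and λ_U = 2 − 2^(1/θ) for the Gumbel-Hougaard copula (the parameter values below are illustrative, not the fitted ones). A two-line check:

```python
theta_galambos, theta_gumbel = 1.2, 2.0          # illustrative fitted parameters
lam_galambos = 2.0 ** (-1.0 / theta_galambos)    # Galambos: 2^(-1/theta)
lam_gumbel = 2.0 - 2.0 ** (1.0 / theta_gumbel)   # Gumbel-Hougaard: 2 - 2^(1/theta)
print(round(lam_galambos, 3), round(lam_gumbel, 3))
```

Comparing these closed-form values against the nonparametric estimates of Equations (14a-c) is exactly the consistency check reported in Table 10.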
Return Period of Rainfall and Runoff Events

In rainfall and runoff frequency analysis, as in other multivariate hydrologic frequency analyses, the purpose is to estimate the joint and conditional return periods (joint and conditional exceedance probabilities) of extreme events for risk analysis, and to provide a framework for engineering design. Following the discussion in Section 3.4, the rainfall and runoff events with given joint and conditional return periods were studied.

Joint Return Period of Rainfall and Runoff Events

The joint return period (i.e., 25-, 50-, and 100-year) for the "AND" case was determined following [51], using the most-likely design realization [Equation (15)] discussed in Section 3.4.1. Using the Old Portage watershed as an example, Figure 6 shows the procedure for the identification of the critical layer and the corresponding rainfall and runoff event (x*, y*). Considering that the Galambos copula belongs to the extreme-value copula family, the parametric Kendall distribution is given as:

K_C(t) = t − (1 − τ) t ln t    (27)

where τ is the parameter, i.e., the Kendall coefficient of correlation. Graphically, it is seen that the empirical Kendall distribution matches the parametric Kendall distribution function for the Galambos copula fairly well, especially at the upper tail (Figure 6a).

Figure 6b provides the graphical link for the identification of t, which results in the joint K(t) being equal to the nonexceedance probability of the 25-, 50-, and 100-year joint return periods. The identified t's are the cumulative probabilities for the identified critical layers shown in Figure 6c. Using the 100-year joint return period as an example, Figure 6d plots the negative log-likelihood of the joint density function f(x, y) [Equation (15b)]. The critical event is then estimated by finding the minimum of the negative log-likelihood function. It is worth noting that, in the case of the Plackett copula applied to the Independence watershed, the Kendall distribution of the Plackett copula needs to be estimated using Monte Carlo simulation with the parametric bootstrap sampling technique, as discussed in Section 4.2.
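For the extreme-value case, the identification of t illustrated in Figure 6b amounts to solving K(t) = 1 − 1/T, which with the closed form of Equation (27) is a one-dimensional root-finding problem. A minimal sketch, with an illustrative Kendall's τ:

```python
import numpy as np
from scipy.optimize import brentq

def K_ev(t, tau):
    # Kendall distribution of an extreme-value copula, Equation (27).
    return t - (1.0 - tau) * t * np.log(t)

def critical_t(T, tau):
    # Solve K(t) = 1 - 1/T for the critical-layer level t (cf. Figure 6b).
    target = 1.0 - 1.0 / T
    return brentq(lambda t: K_ev(t, tau) - target, 1e-9, 1.0 - 1e-9)

tau = 0.45  # illustrative Kendall's tau, not a fitted value
for T in (25, 50, 100):
    print(f"T = {T:>3} yr -> t = {critical_t(T, tau):.4f}")
```

The design pair (x*, y*) is then found by maximising log f(x, y) of Equation (15b) along the critical layer C(F_X(x), F_Y(y)) = t, as in Figure 6d.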
Table 11 lists the critical rainfall and runoff events with joint return periods of 25, 50, and 100 years. The joint return period study indicates that the rainfall and runoff variables for all four watersheds are positively quadrant dependent (PQD) [28], i.e.:

P(X > x, Y > y) ≥ P(X > x) P(Y > y)    (28a)

or equivalently:

H(x, y) ≥ F_X(x) F_Y(y)    (28b)

For illustration purposes, for the Old Portage watershed, the exceedance probabilities for rainfall events with joint return periods of 25, 50, and 100 years are 0.05, 0.02, and 0.01; the right side of Equation (28a) is calculated as 0.023, 0.004 and 0.001, respectively. As discussed in Section 3.4.2, both cases were studied for the conditional return period analysis. The critical runoff events (y*) for given conditional return periods are estimated from the daily rainfall amount. Table 12 lists the daily rainfall amounts with univariate return periods of 25, 50, and 100 years estimated from the fitted entropy-based univariate distribution. Then the conditional return period of Case I (i.e., T_{Y|X>x}) was estimated using Equation (16), and that of Case II (i.e., T_{Y|X=x}) was estimated using Equation (17). Table 13 lists the runoff events obtained for Cases I and II with conditional return periods of 25, 50, and 100 years. Using Old Portage as an example, Figure 7 plots the conditional exceedance probabilities for both cases. Figure 7 indicates that Equations (16) and (17) are nondecreasing functions of the given rainfall event for all runoff events. It further indicates that the rainfall and runoff variables hold the right tail increasing (RTI, for Case I) and stochastically increasing (SI, for Case II) properties. The same results are reached for the other two watersheds modeled by the Galambos copula as well (i.e., W1, Y2). On the other hand, Figure 8 plots the conditional exceedance probabilities for the Independence watershed. One may note the minimal difference in exceedance probabilities (return periods) obtained by conditioning on rainfall events of different return periods for Cases I and II. This finding again indicates that the RTI and SI properties do not hold for the Independence watershed.

Conclusions

This study investigates the relationship between the annual maximum daily rainfall amount and the corresponding daily runoff (discharge) using maximum entropy and copula theories, to address the questions arising from the assumptions in the commonly applied approaches and to better estimate risk. The maximum entropy theory is applied to derive the univariate rainfall and runoff distributions. The joint distribution of rainfall and runoff is studied using the copula method. The following conclusions are drawn from the study: (1) The rainfall and runoff variables are fat tailed, except for the rainfall variable at Old Portage and the runoff variable at Independence. Thus, except for these two cases, the fourth non-central moment needs to be considered as one of the constraints for the derivation of the maximum entropy-based distribution. The maximum entropy-based univariate distribution can successfully model the rainfall and runoff variables, and it also provides a universal solution for the univariate rainfall and runoff frequency analysis. (2) The copula functions capturing the positive dependence structure may appropriately model the bivariate rainfall and runoff distribution. The Galambos copula (belonging to the extreme value copula family) appropriately models the dependence between the rainfall and runoff variables for watersheds W1, Y2 and Old Portage, based on the MLE and formal goodness-of-fit statistics.
Similarly, the Plackett copula appropriately models the dependence for the Independence watershed. (3) Upper tail dependence is found for watersheds W1, Y2, and Old Portage, and the nonparametric/parametric estimation of the upper tail dependence coefficient indicates that the Galambos copula may again model the extreme events, which in turn can be applied to study the joint and conditional return periods for these three watersheds. (4) No upper tail dependence is found for the Independence watershed. This may be explained by the natural flow of the stream being affected by diversion, storage reservoirs and power plants located in the watershed. The fitted Plackett copula can be applied to study the joint and conditional return periods for the Independence watershed. (5) The positive dependence structure and the joint return period ("AND" case) study of the rainfall and runoff variables show that rainfall and runoff are positively quadrant dependent. (6) For watersheds W1, Y2, and Old Portage, the Case I conditional return period indicates the right tail increasing (RTI) property, and the Case II conditional return period indicates the stochastically increasing (SI) property. These findings are in agreement with the upper tail dependence identified for the above three watersheds. (7) For the Independence watershed, the Case I and II conditional return periods indicate that neither RTI nor SI holds (i.e., with given rainfall events of different return periods, the conditional exceedance probability exhibits minimal difference). This finding is in agreement with no upper tail dependence being found for the watershed.

In summary, the study provides an appropriate framework linking the maximum entropy theory and copula theory in multivariate frequency analysis. This framework may lead to a better treatment of both univariate and multivariate studies and permit a better estimation of risk and better engineering design (e.g., the runoff of a given rainfall event in this study). With different types of watersheds, the study shows that for the experimental watersheds (well maintained, with minimal human-activity-induced changes), the dependence and tail dependence structure between rainfall and runoff variables tend to follow the law of the natural rainfall-runoff process. For the watersheds Old Portage and Independence, belonging to the Cuyahoga River basin, even though the positive dependence structure still holds for the whole dataset analyzed, the upper tail dependence is significantly lower. In the case of the Old Portage watershed, the upper tail dependence is in the range [0.3, 0.4], and for Independence, no upper tail dependence exists. This may be explained by the intensity of human-activity-induced hydrological response changes. This finding provides the insight that one needs to pay attention to the real-world situation when applying copulas belonging to the extreme value copula family (e.g., the commonly applied Gumbel-Hougaard copula) to study annual maximum multivariate hydrological time series.

Figure 1. Riesel experimental watershed and Cuyahoga River watershed maps.
Figure 4. Comparison of empirical PDF and CDF versus parametric PDF and CDF of the best-fitted copula function for the experimental watersheds: W1 and Y2.
Figure 5. Comparison of empirical PDF and CDF versus parametric PDF and CDF of the best-fitted copula function for the Cuyahoga River watersheds: Old Portage and Independence.
Figure 6. (a) Kendall distribution plot; (b,c) critical layer identification for the 50- and 100-year events; (d) critical rainfall and runoff event for the 100-year return period as an example.
Figure 7. Conditional exceedance probability estimated for Cases I and II, with the Old Portage watershed as an example.
Figure 8. Conditional exceedance probability for Cases I and II, with the Independence watershed as an example.
Table 2. Thiessen polygon weights for Old Portage and Independence.
Table 3. Sample statistics for each watershed.
Table 4. Lagrange multipliers for the univariate rainfall and discharge distributions. Note: λ_1 is the parameter for ln(X); λ_2 for X; λ_3 for X²; λ_4 for X³; and λ_5 for X⁴.
Table 5. Relative differences between sample moments and those obtained from the entropy-based distribution.
Table 6. Goodness-of-fit statistics for the univariate rainfall and discharge analysis.
Table 7. Rank coefficients of correlation for the rainfall and discharge variables.
Table 8. Estimated copula parameters for the bivariate rainfall and discharge analysis.
Table 10. Estimated upper tail dependence coefficients.
Table 13. Daily runoff (m³/s) estimated for Cases I and II for the 50- and 100-year return periods with the 50- and 100-year daily rainfall amounts (mm).
2014-10-01T00:00:00.000Z
2012-09-24T00:00:00.000
{ "year": 2012, "sha1": "78da2ca5bd4cbe9e6a2c344c30c751c96473595a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/14/9/1784/pdf?version=1424784884", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "78da2ca5bd4cbe9e6a2c344c30c751c96473595a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
239880795
pes2o/s2orc
v3-fos-license
Authenticity of Aloe vera and Acacia Honey on Wound and their Comparative Wound Healing Efficiency on Lacerated Wound in Rabbit.

MATERIALS AND METHODS

Experimental animals

Thirty locally bred, clinically healthy male adult rabbits, weighing between 2 and 2.5 kg, were purchased from the local market of D.I. Khan. Animals were kept in the Rabbit Research House, PARC, Arid Zone Research Centre, D.I. Khan. Animals were randomly divided into 3 groups (A, B and C).

Clinical examination

All animals were kept on a uniform feeding regimen and under uniform managemental conditions in the PARC, AZRC, Rabbit Research House, D.I. Khan. Clinical and laboratory examination of each animal was carried out, and animals showing any signs of disease were replaced with healthy ones. An acclimatization period of at least two weeks was provided to the rabbits, during which 2 doses of ivermectin (@ 400 microgram/kg; S/C) were given to each rabbit. Moreover, to avoid the chances of pasteurellosis, amoxicillin (@ 15 mg/kg; S/C) was injected into all the rabbits for three consecutive days.

Premedication and anesthesia

Each rabbit from all groups was pre-medicated by administering atropine sulphate @ 0.035 mg/kg body weight through the S/C route half an hour prior to surgical intervention. Animals were anesthetized by total parenteral (intramuscular) anesthesia using a mixture of ketamine (35 mg/kg) and xylazine (5 mg/kg) (Razaini et al., 2004).

Contraction % = 100 − (Area of wound on that day × 100) / (Area of wound at day 0)

Figure A: Lacerated wounds induced by sharp-blunt scissors in rabbit.
Fig. 2: Graphical comparison of Acacia honey, Aloe vera gel and pyodine on healing time.
Fig. 4: Graphical comparison of Acacia honey, Aloe vera gel and pyodine on wound index.
Figure B: Lacerated wound after Acacia honey and Aloe vera application at day 10.

It is recommended that you test a small area first for allergic responses. It can be used if there are no allergic responses.
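The contraction percentage above is a simple ratio; a small helper (illustrative only, with made-up wound areas) shows the computation:

```python
def contraction_percent(area_day0, area_today):
    # Contraction % = 100 - (area on that day * 100) / (area at day 0)
    return 100.0 - (area_today * 100.0) / area_day0

# Example: a wound of 4.0 cm^2 at day 0 measuring 1.3 cm^2 at day 10.
print(f"{contraction_percent(4.0, 1.3):.1f} % contraction")
```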
2021-10-26T15:08:16.016Z
2021-08-30T00:00:00.000
{ "year": 2021, "sha1": "8b97b3bdaa33a6c77d6084ad0fbdd3d528017bab", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18782/2582-7146.156", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "38977953fe31ea18a5d49f41f2767fa73b992ed2", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [] }
218676367
pes2o/s2orc
v3-fos-license
Polarization Correlation of Entangled Photons Derived Without Using Non-local Interactions

Entangled photons leaving parametric down-conversion sources exhibit a pronounced polarization correlation. The data violate Bell's inequality, thus proving that local realistic theories cannot explain the correlation results. Therefore, many physicists are convinced that the correlation can only be brought about by non-local interactions. Some of them even assume that instantaneous influences at a distance are at work. Actually, assuming a strict phase correlation of the photons at the source, the observed polarization correlation can be deduced from wave optical considerations. The correlation has its origin in the phase coupling of circularly polarized wave packets leaving the fluorescence photon source simultaneously. Enlarging the distances between the photon source and the observers does not alter the correlation if the polarization status of the wave packets accompanying the photons is not changed on their way from the source to the observers. At least with respect to the polarization correlation of entangled photons, the principle of locality remains valid.

INTRODUCTION

In 1935 Einstein et al. [1] initiated a discussion on whether quantum mechanics is complete or not. In the following years no concrete hints for the occurrence of hidden variables could be found. In 1964 Bell [2] showed, on the basis of two spin-1/2 particles, that local realistic theories can in principle not reproduce the results of quantum mechanics. In 1969 Clauser et al. [3] proposed an experiment to test local hidden variable theories with entangled photons. Only 3 years later, Freedman and Clauser presented the first measurements proving that local realistic theories were not able to describe the experimental results [4].

All experiments providing polarization correlation data with good statistics are performed in such a way that the detection processes of the two distant observers are spacelike separated. Thus, the publications on these experiments generally suggest that the results can only be induced by superluminal signals between the observers. Especially Salart et al. [9] emphasize that the violation of Bell's inequality seems to prove that quantum mechanics makes use of non-local interactions. Discrepancies between the results of local realistic theories and quantum mechanics are also discussed for more complicated quantum systems with more than two particles [13]. Many of these publications insinuate that faster-than-light communication might be possible. The drawback of all these attempts to prove the occurrence of non-local interactions is that until now no concrete results have been presented which reproduce the experimental findings.

In the last few years several recognized physicists have tried to prove that quantum mechanics does not use non-local interactions [14-21]. The authors show that some mathematical operations, like the reduction of a quantum state, only seem to have non-local consequences. On closer examination these operations merely cause changes of the observer's knowledge of the quantum state. The changes thus do not take place in physical space but only in information space. In fact, the results of the experiments with parametric down-conversion photon sources can be derived from wave optical and quantum statistical considerations without using superluminal signals.
There are good arguments to assume that the experiments of Aspect and coworkers with entangled photons emerging from a specific decay cascade of calcium [5,6] can also be explained without using non-local interactions. However, additional tests on the polarization status of the photons would be helpful in order to conclusively answer the question.

PHOTON PAIRS ARISING FROM DOWN-CONVERSION SOURCES

In the last 22 years several polarization correlation experiments with parametric down-conversion sources have been performed [7-12]. Where necessary, experimental details are taken from the doctoral thesis of Weihs [22]. In a BBO crystal, ultraviolet photons are converted into two phase-coupled, circularly polarized green photons with equal energies. The circularly polarized wave packets are immediately decomposed into two linearly polarized wave packets with orthogonal polarization directions. The ordinary beam is vertically polarized; the extraordinary beam is horizontally polarized. Due to the different propagation directions, the emission cones of the ordinary and extraordinary beams appear on the exit plane as two off-centered circles which intersect each other at two points (see Figure 1). After traversing a compensation plate, the reassembled circularly polarized wave packets leave the two intersection zones nearly unchanged. In the polarization correlation experiments with parametric down-conversion sources only the so-called singlet configuration has been studied. In this configuration the polarization planes of associated photons rotate in the same direction. In statistical average, about one half of the photon pairs rotate clockwise, the other half counterclockwise.

DETECTION OF POLARIZED PHOTONS BY ALICE AND BOB

Photons emerging from the two exit sites of the source are guided by optical fibers to the observers. After leaving the optical fibers, the wave packets traverse an electro-optical modulator arranged between two suitably oriented quarter-wave plates. In combination, the three optically active elements twist linearly polarized waves by an arbitrarily choosable angle proportional to the applied voltage. The detector unit is fixed in space; the twist of the plane waves by the electro-optical modulator simulates a virtual twist of the detector unit. For the sake of convenience it will be assumed in the following that the twisting units are omitted and that the detectors are really twisted in space. By the use of Wollaston prisms, Alice and Bob split the incoming wave packets into two equally large components with orthogonal polarization directions. The linearly polarized components hit altogether four detectors, which should be highly sensitive in order to detect nearly all incoming photons [11,12]. When the apparatus is thoroughly adjusted, the count rates of the detectors should no longer depend on the polarization direction. In the four detector channels each registered pulse is saved together with an individual time stamp. After the measurement is finished, the four data lists are compared in order to determine four coincidence rates, namely I(α, β), I(α, β + 90°), I(α + 90°, β), and I(α + 90°, β + 90°). Let I_0 be the coincidence rate when the selecting filters are removed on both sides of the experiment. If the losses in the filters are negligible, I_0 is also the coincidence rate summed up over the four channels. The two coincidence rates I(α, β) and I(α, β + 90°) add up to I_0/2. The same is true for the coincidence rates I(α + 90°, β) and I(α + 90°, β + 90°).
Thereby one has to bear in mind that coincidence rates exhibit statistical uncertainties. In this article, particle as well as wave aspects will be addressed, because the correlation of photons detected by Alice and Bob depends on the relative phase of the circularly polarized wave packets accompanying the photons. The derivation of the polarization correlation is mainly based on wave arguments, but where necessary particle aspects will also be considered. The terms "wave" and "light" are often used for convenience. In fact, a light beam will always be understood as a stream of independent wave packets with limited coherence length. Only wave packet pairs incorporating entangled photon pairs are strictly phase coupled when they leave the photon source. In the experiment of Weihs [22, p. 63] the coherence length has been estimated to be about 0.1 m. Thus, the wave packets leaving the photon source are very short in comparison with the distance between Alice and Bob, thereby precluding wave-based non-local interactions between the observers.

FORMAL DERIVATION OF THE POLARIZATION CORRELATION

In wave optics and quantum mechanics one often asks for the phase relation of interfering waves in the detection plane in order to get the interference pattern. In correlation experiments, however, one has to ask for the phase relation of two associated wave packets at the source. The relative phase at the source manifests itself in the overlap integral of the two normalized wave packets. The two wave packets simultaneously leaving outputs A and B have a phase shift of ±90° at the source. The sign reveals which of the wave packets is leading. In Figure 1 the phase shift is indicated by twisted rotation vectors. If α ≠ β, an additional phase shift of ±(α − β) has to be taken into account. The sign depends on the rotational direction of the two circularly polarized wave packets. Thus, the total phase shift of the two linearly polarized partial waves looked for by the two observers is

Δφ = ±90° ± (α − β)    (1)

Neglecting the envelope function, one has to evaluate the overlap integral of the two normalized functions

ψ_1(ωt) = sin(ωt)/√π and ψ_2(ωt) = sin(ωt + Δφ)/√π    (2)

The second function, divided by the normalizing factor, can be converted by using trigonometrical addition theorems twice:

sin(ωt ± 90° ± (α − β)) = ±cos(ωt ± (α − β)) = ±[cos(ωt) cos(α − β) ∓ sin(ωt) sin(α − β)]    (3)

By using the definite integrals

∫_0^{2π} sin²(ωt) d(ωt) = π and ∫_0^{2π} sin(ωt) cos(ωt) d(ωt) = 0    (4)

one can easily calculate the overlap integral

⟨ψ_1|ψ_2⟩ = ∓sin(α − β)    (5)

The (absolute) square of the overlap integral of the two normalized, phase-coupled wave packets is proportional to the coincidence rate. As has been explained in the previous chapter, the coincidence rates I(α, β) and I(α, β + 90°) add up to I_0/2. Therefore, the proportionality factor must be I_0/2. Thus the coincidence rate is given by

I(α, β) = (I_0/2) sin²(α − β)    (6)

and the correlation is given by

C(α, β) = sin²(α − β)    (7)

With this rather simple consideration the experimentally found correlations of entangled photons have been fully reproduced.

WORKING OUT QUANTUM STATISTICAL ASPECTS

Quantum statistics will become much clearer if each of the two circularly polarized light beams A and B leaving the source is formally split into two commensurate linearly polarized beams with orthogonal polarization directions. A circularly polarized wave can always be understood as the superposition of two equally sized linearly polarized partial waves with orthogonal polarization directions. The two partial waves are phase shifted with respect to each other by ±90°. The orientations of the linear polarizations ϑ and ϑ + 90° can be freely chosen. The photons contained in the two partial beams form two disjoint groups.
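The overlap-integral result above can be verified numerically. The sketch below (a check consistent with the reconstruction of Equations (1)-(7) above, with illustrative function names, not code from the paper) integrates the two normalised sinusoids over one period and compares the squared overlap with sin²(α − β).

```python
import numpy as np
from scipy.integrate import quad

def overlap_sq(alpha_deg, beta_deg):
    # Normalised partial waves psi(t) = sin(t)/sqrt(pi) over one period; the
    # second wave is shifted by 90 deg + (alpha - beta), one sign choice of Eq. (1).
    dphi = np.deg2rad(90.0 + (alpha_deg - beta_deg))
    integrand = lambda t: np.sin(t) * np.sin(t + dphi) / np.pi
    val, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return val**2

for a, b in [(0, 0), (30, 0), (45, 0), (60, 30)]:
    print(a, b, round(overlap_sq(a, b), 6),
          round(np.sin(np.deg2rad(a - b))**2, 6))
```

The two printed columns agree, confirming that the squared overlap, and hence the coincidence rate of Equation (6), follows sin²(α − β).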
If a photon has been assigned to a linearly polarized partial beam, it will always stay in that beam. There is no intermixing between the two photon groups on their way from the source to the observers, even if the photons and the accompanying wave packets traverse electro-optical modulators and quarter-wave plates. All modern experiments are planned with the aim that the selection and detection processes carried out by the two observers are spacelike separated. Therefore, the splitting is performed just in front of the detectors. The rather late fixing of the angles α and β even concerns photons leaving the source much earlier. Thus, the splitting of the circularly polarized beams admittedly needs non-local information, but certainly no non-local interaction, because the two streams of photons propagating toward Alice and Bob are not modified by the repeated change of the detection angles. Before the photons reach the associated Wollaston prism, the splitting procedure is a purely mathematical, not a physical, process.

Due to their common origin, entangled photon pairs are phase coupled when they leave the source. In the case of parametric down-conversion processes, the two entangled photons are in phase, but the two associated circularly polarized wave packets are phase shifted by ±90°. As the optical paths from the source to Alice and Bob will generally not be balanced, the initial phase information cannot be recovered by simply comparing the arrival times of the entangled photons. This would be impossible merely due to the limited time resolution of external clocks and the jitter of the detection electronics. Fortunately, the two beams are equipped with synchronized internal clocks which can easily be read off by the observers. Within one wave cycle the polarization plane performs a full turn. Thus, the relative phase of the photons at the source, up to multiples of 180°, can be recovered from the difference of the polarization angles looked for by the two observers. The modulo-180° term comes from the 180° periodicity of the polarizer's transmittance.

The polarization correlation, with due regard to the particle aspect, will be derived in two steps. At first the case α = β will be discussed. This step covers the crucial point in the line of arguments explaining why the entangled photons are statistically distributed to only two of the four possible coincidence channels. The two partial beams A(α) and B(α + 90°) are in phase (or opposite in phase) at the source. The same is true for the partial beams A(α + 90°) and B(α). As the photons are in phase at the source, they must be found either in the coincidence channel A(α)/B(α + 90°) or in the coincidence channel A(α + 90°)/B(α). As the two coincidence channels are equivalent, the probabilities to find the entangled photon pairs in these two coincidence channels must be equal. In contrast, the partial beams A(α) and B(α) are phase shifted at the source by ±90°. That means they are orthogonal to each other. The same is true for the partial beams A(α + 90°) and B(α + 90°). Therefore, there will be no coincidences in these two coincidence channels. The considerations above prove that the two entangled photons are both contained

either in the partial wave pair A(α) and B(α + 90°), or in the partial wave pair A(α + 90°) and B(α).    (8)

Whether the photon is detected by detector A(α) or by detector A(α + 90°) is purely accidental. One cannot predict which detector will be hit by individual photons.
However, after the detection of the first photon of a photon pair, for example on Alice's side, it will be clear which one of the two detectors on Bob's side will be hit by the second photon. Only the anti-correlation of entangled photons is predefined, not the polarization of individual photons [23]. This is why the polarization direction should not be thought of as an element of reality. The phase relation of the partial beams at the source thus leads to the strong polarization correlation, although the information on the polarization status is not a hidden property of the photons. Einstein et al. [1] had claimed that a property equally found in two no longer interacting quantum states must be an element of reality. The pronounced polarization correlation of entangled photons seems to be a counterexample. The wrong estimate of Einstein and his coworkers has entailed the erroneous approach of Bell [2], who assumed that the polarization directions are real properties of the photons. In fact, the phase coupling only predefines the interrelationship, not the property itself. In consequence, Bell's inequalities are irrelevant.

The extension of the consideration to the case α ≠ β is rather trivial and exclusively rests on an optical law discovered by Etienne Louis Malus in 1810. Malus' law says: if light linearly polarized in direction γ traverses a polarization filter with its polarization axis oriented in direction δ, its intensity is reduced by the factor cos²(γ − δ). One cannot predict which one of the photons will traverse the polarization filter, because Malus' law has a purely statistical character. The law is valid not only for light leaving a classical light source but also for laser light. That means it does not depend on second-order coherence properties of a photon stream. It is also experimentally proven in the case of low intensity, when the beam intensity is measured by single-photon detectors. Brukner and Zeilinger explicitly show that Malus' law is also valid in the quantum regime [24]. In one of his recent publications, Khrennikov has also used Malus' law when he derived the polarization correlation of entangled photons starting from quantum mechanical considerations [16, p. 3].

The first of the two alternatives in Equation (8) means that if one of the entangled photons has been recorded by detector A(α), the associated photon will certainly be contained in the partial beam B(α + 90°). Therefore, one has to apply Malus' law for γ = α + 90° and δ = β. That means the coincidence rate I_0/2 is reduced by the factor cos²(α + 90° − β) = sin²(α − β). Therewith the coincidence rate I(α, β) is given by

I(α, β) = (I_0/2) sin²(α − β)    (9)

in accordance with Equation (6). The roles of Alice and Bob can be exchanged: if the circularly polarized beams are split into partial beams linearly polarized in the directions β and β + 90°, the results presented above will be reproduced.

For α ≠ β, Malus' law, with its inherently statistical character, has to be applied on Alice's or on Bob's side. In this case the correlation C(α, β) is larger than zero and smaller than unity. Thus, the correlation is not defined for a single pair of entangled photons but only for a sufficiently large group of entangled photon pairs. As has been proven above, the piece of information responsible for the emergence of the pronounced correlation is the phase shift of the two associated wave packets when they leave the source.
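The two-step argument above, equal-probability channel assignment at the source followed by Malus' law for the angle mismatch, can be checked with a small Monte Carlo simulation. The sketch below reproduces the sin²(α − β) coincidence fraction of Equation (9); it is an illustration of the reasoning, with assumed function names, not an implementation from the paper.

```python
import numpy as np

def coincidence_fraction(alpha, beta, n_pairs=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: at the source each pair is assigned, with equal probability, to
    # the channel pair A(alpha)/B(alpha+90) or A(alpha+90)/B(alpha), Eq. (8).
    alice_in_alpha = rng.random(n_pairs) < 0.5
    # Step 2: Bob's photon is linearly polarised at alpha+90 (or alpha) and
    # passes his beta output with the Malus probability cos^2(pol - beta).
    pol = np.where(alice_in_alpha, alpha + 90.0, alpha)
    p_pass = np.cos(np.deg2rad(pol - beta)) ** 2
    bob_in_beta = rng.random(n_pairs) < p_pass
    # Fraction of all pairs landing in the coincidence channel A(alpha)/B(beta).
    return np.mean(alice_in_alpha & bob_in_beta)

alpha = 0.0
for beta in (0.0, 22.5, 45.0, 67.5, 90.0):
    sim = coincidence_fraction(alpha, beta)
    print(beta, round(sim, 4), "expected",
          round(0.5 * np.sin(np.deg2rad(alpha - beta)) ** 2, 4))
```

The simulated fractions converge to (1/2) sin²(α − β), i.e. I(α, β)/I_0 of Equation (9), without any communication between the two simulated observers.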
Traditionally, quantum mechanics strictly takes into account phase differences of wave functions contained in a matrix element. Therefore, it can be assumed for sure that the phase difference of the two entangled photons will also be considered in quantum mechanics. It is not relevant whether the correlation problem is handled classically or quantum mechanically; it is only relevant whether the phase information is used or not. The calculations based on local realistic theories do not consider phase relations. They only try to reproduce the polarization correlation by assuming that the polarization directions of the entangled photons are encoded in the photons as hidden variables. In explaining the strong polarization correlation of entangled photons, only their relative phase at the source is relevant.

GENERAL REMARKS

The pronounced correlation of entangled photons is neither surprising nor mysterious. It solely depends on the initial phase shift of the circularly polarized waves accompanying the entangled photons. One only has to make sure that the polarization directions α and β looked for by the two observers are associated with corresponding polarization angles at the source. This condition is fulfilled in each of the experiments. Hereby it is not relevant at what time the polarization directions have been chosen. The purely conceptual splitting of the two partial beams and the detection of the photons have no effect on the parametric down-conversion process. The relative phase of the entangled photons has been fixed inside the source. The observers only decide which polarization directions they look for. There is no need for a superluminal information transfer between the observers. The distance between the observers is absolutely irrelevant.

The relative phase of entangled photons at the source could be declared to be a hidden variable finally revealed by the coincidence detection process. Hidden variables of this type can only be associated with wave packets, not with particles. The decisive point of the argumentation is that the wave intensity, and thus also the coincidence rate, is proportional to the (absolute) square of the scattering amplitude. Properties are only manifested after squaring the overlap integral. In particle-based considerations, properties directly act upon counting rates. Bell's inequality is misleading because it attributes properties like polarization directions to particles and not to waves. Therefore, Bell cannot take into account phase differences of entangled photons. In future one should ignore violations of Bell's theorem, because Bell's considerations are not adequate to describe wave phenomena.

CORRELATION OF PHOTON PAIRS IN TRIPLET CONFIGURATION

A pronounced correlation of entangled photons should also be observable in the triplet configuration. That means that the two circularly polarized waves are rotating in opposite directions. In this case the correlation cannot be derived as easily as in the singlet case. One can figure out that the triplet configuration arises from the singlet configuration by mirroring one of the circularly polarized waves at a vertical plane. This can be performed by a half-wave plate with the optical axis oriented in the vertical direction. If the circularly polarized wave packets are phase shifted by ±90°, the correlation should be

C(α, β) = sin²(α + β)    (10)

Thereby the origins of the angles α and β have to lie in the vertical plane. Preliminary measurements of Weihs [22, p. 72] support this result.
For example, if the two observers both look for polarization directions parallel to 45°, the coincidence rate is at a maximum. In a former publication [25] the sign in the correlation equation for the triplet configuration was minus instead of plus. The sign change has to do with the fact that Bob's coordinate system was left-handed in the previous article. In the consideration above both coordinate systems are right-handed.

PROPERTIES OF PHOTON PAIRS ARISING FROM ATOMIC SOURCES

In the experiments with parametric down-conversion sources, the two circularly polarized wave packets are phase shifted by ±90°, leading to a strict anticorrelation of the linear polarizations. In contrast, in the experiments of Aspect et al. [5,6] the two circularly polarized wave packets are in phase or opposite in phase. Therefore, the correlation is given by

C(α, β) = cos²(α − β)    (11)

DOES IT HELP TO POSTULATE NON-LOCAL INTERACTIONS?

Is it really helpful to postulate a novel interaction which is in serious conflict with special relativity? Postulating an information transfer faster than light entails a wealth of new problems. An instantaneous influence at a distance requires that simultaneity can be strictly defined for distant locations, in contrast to corresponding assertions of special relativity. Even if such principal objections are ignored, many practical problems arise. How could such a postulated interaction generate correct results? In correlation experiments the ratio of the coincidence rates in two complementary channels, I(α, β) and I(α, β + 90°), has to be precisely defined. The newly postulated interaction would have to redirect a well-specified percentage of stochastically arriving photons from one channel to the other one. The expected ratio of coincidences in the two channels depends on the difference of the polarization directions α and β. How does the postulated interaction get the information on the angles? In the experiments, the twisting angles α and β are generated by applying voltages to electro-optical modulators. How could any theory whatsoever associate a voltage with an angle? The proportionality factor depends on the material, on the orientation of the crystal axis and on numerous other experimental details. Actually, in the optical fibers spurious birefringent effects occur, which are manually compensated. How can the postulated new interaction know whether the apparatus is well adjusted or not? Moreover, all the twisting processes are frequency dependent. Only light composed of photons like those used in the experiment can gain the information on the adjustment status and on the angles α and β. The experiment of Salart et al. [9, p. 863] shows that the postulated "spooky" interaction must be at least 50,000 times faster than the speed of light. If the lengths of the optical fibers differ distinctly from each other, the superluminal signal has to wait a long, but extremely well-defined, time interval before it redirects individual pulses from one output to the other one. It will be extremely difficult to embed such a delayed reaction in a serious physical theory.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/supplementary material.

AUTHOR CONTRIBUTIONS

The author confirms being the sole contributor of this work and has approved it for publication.
2020-05-19T13:08:29.548Z
2020-05-19T00:00:00.000
{ "year": 2020, "sha1": "bbdb21dfe3150245596aa52f58148f8e27b64086", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2020.00170/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "bbdb21dfe3150245596aa52f58148f8e27b64086", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259513311
pes2o/s2orc
v3-fos-license
A multi user‐centred design approach framework to develop aesthetically pleasing and sustainable OTCP packaging

With the healthcare industry moving towards self‐medication, the number of self‐service pharmacies stocking over‐the‐counter pharmaceuticals (OTCP) is rising. The aesthetic attributes of OTCP packaging are critical to attract consumers' attention against competing products. Moreover, sustainable design aims at minimising the negative environmental impacts of packaging. Studies show that stakeholders' interests should be represented more in pharmaceutical packaging, specifically in the early stages of the design process. For this reason, OTCP packaging design is challenging, as sustainable packaging is typically seen as unappealing. Within this context, this paper presents a novel and comprehensive framework aimed at supporting designers to develop aesthetically pleasing and sustainable OTCP packaging, placing multiple users at its core. Studies with OTCP packaging stakeholders were first conducted to identify the framework requirements together with the necessary OTCP packaging attributes. A framework architecture was developed and subsequently implemented in a proof‐of‐concept computer‐based tool. The framework and its implementation were evaluated with the OTCP packaging development stakeholders. Results provide a degree of evidence that the framework contributes significantly to guiding OTCP packaging designers in taking the right decisions, and can also provide the first steps towards considering aesthetics and sustainability in packaging design in other sectors, namely food and beverage.

polyvinylchloride in the pharmaceutical sector for blister packaging.2 The environmental impact of food packaging has been researched in great depth.3-5 However, a limited number of studies which assess the impacts of pharmaceutical packaging throughout its life cycle6-9 have been published. This could be because of the strict regulations and standards that pharmaceutical packaging must abide by.10,11 Belboom et al8 provide one such life-cycle assessment. To generate a holistic assessment of the sustainability status of pharmaceutical packaging, the social and economic pillars must be considered. By taking user requirements into account, packaging designers can understand how to feasibly integrate emotive aspects into sustainable pharmaceutical packaging design. Designing to the user's needs and abilities directly supports the economic pillar by enhancing the packaging's utilisation and quality. This will ensure that the packaging developed is cost-effective. A user-centred approach in packaging design promotes the well-being, health and engagement of the end-user.12 Inclusive pharmaceutical packaging is essential, as the potential user groups include elderly people who may have age-related difficulties, for example, in opening the packaging.13 Although the packaging should be designed in such a way that the pharmaceutical is easily accessible, the designer must also incorporate child-resistance features to promote child safety.11,12 Over-the-counter pharmaceuticals (OTCP) are bought without a prescription and used to treat minor ailments, such as mild pain and common colds. As the medication is sold without a prescription, the consumer is typically not influenced by authorities in the field, such as doctors and pharmacists.14 Self-medication is becoming increasingly common with consumers.15
This is accommodated by the number of self-service pharmacies, groceries stocking OTC pharmaceuticals and those being sold over the internet.14-18 A study conducted by More and Srivastava19 considered data from chemists, who stated that consumers remember OTCP products by their aesthetic attributes. They are of the opinion that, if OTCP companies focus on aesthetics, sales can increase.19 Colours can be used to convey the price and quality of the product.19,20 They are also used to attract consumers' attention and influence their emotional responses towards the product.21-23 The OTCP packaging's aesthetic appeal acts as the silent salesman and greatly influences the consumer's decision-making process.24 A study conducted by Kauppinen-Räisänen and Luomala25 found yellow, red and blue to be the colours typically preferred for OTCP packaging. Colours have different meanings and associations depending on the geographical location.14 For example, in Japan painkillers are grey and blue, whereas Americans associate painkillers with red.14

Consumers use packaging materials to judge how sustainable the packaging is.26-28 Blister packs and bottles are the two types of OTCP packaging commonly used for solid-dose tablets. The type of packaging will affect the overall sustainability status of the product. Blisters cannot be recycled due to adhesives and surface treatments of the materials.29 Because the packaging is made from multiple materials, its components cannot be separated upon disposal. Bottles provide the opportunity for mono-material packaging systems, where only one material is used throughout,30 for example, a plastic bottle with a plastic cap. Bottles offer less protection to pharmaceuticals than blister packaging.29

Consumers can only perceive the product's sustainability through the packaging cues.31 Designers can add cues to communicate its sustainability status, including colour, imagery and labels. This implies that the packaging's visual appearance plays an important role when consumers aim to make sustainable decisions. Greenwashing describes packaging carrying misleading sustainability cues, such as being paper-based with a minimalistic design, or being of the colour green, without the verbal information to accompany it.32 It was shown that consumers' perception of a product improved when the colour green was accompanied by a label.32 Despite this, sustainable packaging is typically seen as less aesthetically appealing than conventional packaging.33-35 For this reason, designers must strike a balance between the sustainability and aesthetic attributes. Consumer demand for sustainable packaging in low-risk products, such as food, has been greatly researched.36-39 However, to the authors' knowledge, there are no studies that investigate the importance consumers give to packaging sustainability in high-risk products, such as pharmaceuticals. The development of sustainable pharmaceutical packaging is still in its initial phases.40 Research on sustainable packaging design has been stimulated by consumer demands and government standards. This has produced various packaging design support systems, which are available to packaging design stakeholders.41 However, the high number of design support systems creates incongruity in sustainable packaging design.
Ma and Moultrie 42 suggest that designers may not know which tool is best suited to the design stages or what results to expect from each design support system. Studies show that stakeholders should be better represented in the early stages 43 of the OTCP packaging design process. 13,[44][45][46][47] It has also been shown that, in recent years, the need to consider the environmental impacts of packaging in a more systematic and holistic way has been recognised. 48 Poslon et al 49 stated that the packaging's appearance evokes emotions in the consumer, thus contributing to a positive experience and raising their expectations of the product inside. 24 Given the complex balance that the packaging designer must strike to ensure that all requirements are met, a design support framework can be used to facilitate this process.

To the authors' knowledge, there are currently no studies that describe the design process for sustainable pharmaceutical packaging. Therefore, to understand how to design OTCP packaging, methodologies proposed by Pahl et al 50 and Bix et al 51 can be adopted. Pahl et al 50 describe a model to design products, which starts with the task clarification stage. In this stage, the problem is formulated and a list of specifications is defined. These specifications are then used as a reference to generate design concepts in the conceptual stage. The most appropriate design concept is optimised in the embodiment stage. The detailed design stage finalises the documentation related to the product development. Bix et al 51 developed a model for packaging design in general. Similarly, the packaging is planned in the initial stages of the design process. Following this, packaging design concepts are generated, and the packaging system is defined to include all packaging components. The manufacturing and graphic requirements for the packaging are then specified and refined.

An extensive literature review indicates that there is a gap in investigating the aesthetic and sustainability aspects of OTCP packaging design. This motivated an evaluation of 10 design support systems, three of which support design in the pharmaceutical industry. The evaluation revealed a gap in design support systems, as none of them satisfied all the established review criteria. This raises the following research problem, concerned with developing a framework for a computer-based tool aimed at guiding designers when developing sustainable and aesthetically pleasing OTCP packaging. To address this research problem, this paper presents a novel and comprehensive approach framework, ASSIST-OTCPP. 52 The material and methods used in this study are described in the second section, followed by a review of packaging design support systems. The computer-based proof-of-concept implementation of the ASSIST-OTCPP 52 framework is then presented. An evaluation of the framework and computer-based tool follows. The results obtained are discussed, and subsequently, conclusions are drawn, highlighting this work's contribution to packaging design.

| MATERIALS AND METHODS

ANSYS Granta EduPack 53 is software for materials education, typically used by students and academics. It provides a database of materials and process information to assess sustainability in engineering design. In this study, it was employed to gather data on the sustainability metrics of OTCP packaging materials. This data was used throughout the environmental and cost analysis of typical OTCP packaging applications.
It was chosen because it is an accessible software package that the authors were familiar with, and it provided sufficient depth of data to conduct the environmental and cost analysis within the framework.

Three types of methods were employed in this study: a literature review, market research, and observation through data collection instruments. A literature review of related studies on aesthetics in OTCP packaging, sustainable packaging, and knowledge generation for design support systems was conducted to understand how to develop a framework that guides OTCP packaging designers. An adapted design process model for OTCP packaging was also developed through the literature review. Market research was carried out to set a benchmark for existing OTCP packaging on the market. This was used to generate the knowledge within the framework, whereby OTCP packaging available on the market was used as case studies in the aesthetic and sustainability analyses. Observations of OTCP packaging stakeholders through semi-structured interviews and surveys generated insight into the requirements for developing the user-centred framework. The stakeholders encompass (i) consumers who will be purchasing the OTCP product, (ii) pharmacists who will be dispensing the OTCP product and (iii) OTCP packaging development stakeholders who will ultimately be using the framework.

Semi-structured interviews with eight OTCP packaging development stakeholders were first conducted to identify the framework requirements. The participants were three females and five males, with five Maltese participants and three international participants (from the Netherlands, India and England). A good mix of design experience was achieved despite the small sample size, as participants had between 2 and 25 years of experience in handling OTCP packaging. A small sample size of between five and 10 participants is sufficient to reach saturation in qualitative research, 54-56 as the depth achieved in the data leads to stronger internal consistency. 57 The participants were recruited through social media, where they were given a letter of information describing the objectives of the study, and they were asked to sign a consent form to allow the interviews to be recorded and transcribed. A thematic analysis was conducted using QSR International's NVivo 12 software. 58 The identified themes were tested for intercoder reliability, resulting in an average Cohen's kappa score of 0.71, which indicates substantial agreement between the coders. 59

In addition, surveys with consumers and pharmacists were carried out to understand the requirements for aesthetically pleasing and sustainable OTCP packaging. The participants were part of the Maltese adult population. Both surveys were tested for reliability and validity using IBM Statistical Package for Social Sciences software. 60 The Pearson's coefficients for the consumers' and pharmacists' surveys were 0.984 and 0.868, respectively. The Cronbach's alpha score was 0.903 for the consumers' survey and 0.986 for the pharmacists' survey. Pilot tests with 14 consumers and 10 pharmacists were then conducted, which confirmed that the proposed data collection instruments were appropriate to reach the objectives of the study. The consumers' survey was carried out with 102 participants in total, ranging in age, gender, level of education and location of residence. The pharmacists' survey was carried out with 48 participants.
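For readers who want to sanity-check the reliability figures reported above, the following is a minimal sketch of how Cronbach's alpha and Cohen's kappa are conventionally computed. It is not the authors' analysis pipeline (the study used NVivo and SPSS); the respondent ratings and coder labels below are hypothetical placeholders.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohen_kappa(a, b) -> float:
    """Cohen's kappa for two coders' categorical labels."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                        # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: six respondents rating four Likert items,
# and two coders labelling ten interview segments.
ratings = np.array([[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4],
                    [2, 3, 2, 3], [4, 4, 5, 5], [3, 4, 3, 4]])
coder_a = ["cost", "aesthetics", "safety", "cost", "eco",
           "eco", "safety", "aesthetics", "cost", "eco"]
coder_b = ["cost", "aesthetics", "safety", "eco", "eco",
           "eco", "safety", "cost", "cost", "eco"]

print(f"alpha = {cronbach_alpha(ratings):.3f}")
print(f"kappa = {cohen_kappa(coder_a, coder_b):.3f}")
```

With real survey data in place of the placeholders, an alpha near or above 0.9 and a kappa above roughly 0.6 would be read, as in the study, as high internal consistency and substantial intercoder agreement, respectively.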
The participants' demographic strata are in line with Maltese population norms. The results from this study contributed to the development of the corresponding stakeholders' knowledge base in the framework. The results of these surveys are presented throughout this paper. To evaluate the framework and its prototype tool implementation, further semi-structured interviews were employed as the data collection instrument.

| REVIEW OF PACKAGING DESIGN SUPPORT SYSTEMS

The semi-structured interviews with eight OTCP packaging development stakeholders provided insight into what is required from a framework used to design OTCP packaging. These interviews also defined the essential engineering characteristics of sustainable and aesthetically pleasing OTCP packaging. Participants were asked how likely they were to use a framework to guide them when integrating aesthetic and sustainability considerations in the design of OTCP packaging. All participants stated that they would 'Always' use such a system. The results of the interviews indicated that such a framework should (a) use OTCP packaging on the market as case studies in aesthetic and sustainability analyses, (b) suggest aesthetic qualities based on the target consumer demographics, (c) inform designers of standards to observe, (d) be implemented as a computer-based tool rather than paper-based design guidelines, and (e) be used throughout the task clarification design stage.

The design framework requirements were used to critically review design support systems used in industry. The critical review was conducted to determine whether they are adequate for designing sustainable and aesthetically pleasing OTCP packaging and to evaluate trends in packaging design support systems. A set of references was created to define the literature search. Searches were conducted using Google, Academia and Design Society. The OTCP packaging development stakeholders emphasised the importance of striking a balance between sustainability and aesthetics. It is therefore essential to consider a multidisciplinary approach to OTCP packaging design, such as those in previous studies. 61-70 However, none of these design support systems is related to packaging, so the critical review concentrates on systems that support packaging design. The term 'packaging' was searched together with terms such as 'design tool' and 'design support'. The search was restricted to studies in English from 1997 onwards, as the initial and fundamental sustainable design publications date from this time. Duplicates were excluded from the selection. In total, 10 design support systems were reviewed, which were classified into two categories: those supporting pharmaceutical packaging design and those supporting packaging design in other sectors, such as the fast-moving consumer goods industry. 52 This distinction was also raised in the interviews with OTCP packaging development stakeholders. The participants noted that, while shelf appeal is less important for OTCPs, the visual and verbal elements facilitate identification, which is an important requirement in the OTCP packaging industry. It was found that identification using colour cues reduces medication error, 80 and similarities in packaging have contributed to incorrect medication administration. 81 Therefore, design systems for the packaging of other industries are not suitable to guide OTCP packaging designers. Table 2 presents the strengths and limitations of the design support systems which support designers when developing sustainable pharmaceuticals and pharmaceutical packaging.
Out of the 10 design support systems reviewed, only three support design in the pharmaceutical industry, and none satisfied all the established review criteria.

Figure 1: Graphical summary of the relevant design support systems with respect to the review criteria.

Figure 2: Adapted design process model for over-the-counter pharmaceuticals (OTCP) packaging.

| THE ASSIST-OTCPP FRAMEWORK

The ASSIST-OTCPP 52 framework is characterised by 10 steps, as depicted in Figure 3. To illustrate these steps, the following case study is considered. A designer is tasked with improving the packaging for an OTC 'Pharmaceutical X' so that it meets the client's functional, aesthetic and sustainability requirements. 'Pharmaceutical X' is a solid-dose tablet, used as pain relief for women during menstruation. The tablets weigh 0.8 g each and are 8 mm in diameter and 6 mm long. Each pack should contain 48 tablets. The tablets are sold in Europe, in plastic bottle packaging with a plastic cap. The bottle is 5 cm in diameter, has a height of 8 cm and weighs 7 g. The cap is 4 cm in diameter, has a height of 3 cm and weighs 5 g.

Step 1: The designer receives the above OTCP packaging proposal with the target consumer demographics (in this case, women of any age and level of education) and the OTCP tablet specifications (tablet dimensions and weight, and a pack size of 48 tablets). From the survey with consumers, it was found that the level of education influences how often consumers purchase OTCP products. More and Srivastava 19 also found that educational level influences OTCP consumption. Education plays a role in a person's well-being and overall health, as it is a predictor of health outcomes. A higher level of education allows for more opportunities and, generally, a higher income; financial disadvantage increases the chance of chronic stress, as there is reduced access to health-promoting facilities such as clinic visits. 82 Consumers with a lower level of education may therefore turn to self-medication, such as paracetamol, to treat the symptoms of chronic stress. [83][84][85][86] Other studies 87-91 also found that age influences the frequency of OTCP purchase.

Step 2: The information on the target consumer demographics and tablet specifications is inputted into the framework and passed through the inference engine. If the framework is used to improve existing packaging, the OTCP packaging specifications (dimensions and weight), which will be present on the product proposal, are also inputted into the framework. Figure 4 shows the graphical user interface (GUI) of the computer-based tool, through which the designer inputs this information.

Step 3: The inference engine inputs this information into the knowledge base, which also contains knowledge from the knowledge acquisition module, represented by IF-THEN rules.

Step 4: The knowledge base module transfers this knowledge into the inference engine.

Step 5: The inference engine outputs the knowledge to the user, from the knowledge acquisition and modelling frame.

Step 6: The sustainability assessment of OTCP packaging. The GUI for this step is presented in Figure 5. The five sections of Step 6 consist of the following:

Figure 5: Graphical user interface (GUI) of the implemented tool, corresponding to Step 6 of the ASSIST-OTCPP 52 framework.

Step 6a: Environmental and cost analysis of OTCP packaging solutions.

Step 6b: Ranking of the perceived sustainability of OTCP packaging materials. The radar chart ranks how sustainable consumers perceive typical OTCP packaging to be. These rankings were obtained from surveys with consumers.
From this survey, it was found that consumer demographics influence the perceived sustainability of OTCP packaging materials. As bio-plastic bottles are perceived as the most sustainable packaging by the target audience of this case study, the packaging should communicate the sustainable design considerations taken in its design if bio-plastic bottles are not used.

Step 6c: Analysis of the energy use during the manufacturing of OTCP packaging solutions. Data was again collected from ANSYS Granta EduPack. 53

Step 6d: Analysis of packaging weight and packaging volume per 1 g of tablet, compared with similar OTCP packaging on the market. This case study considers improving the plastic bottle packaging for 'Pharmaceutical X'. The ASSIST-OTCPP 52 framework calculates the ratio of packaging weight and volume to 1 g of tablet and compares it with the ratio of similar plastic OTCP bottle packaging on the market. In this case, the current packaging solution has a good ratio of packaging to product weight. However, it has a high ratio of packaging volume to product weight, so the designer must consider reducing the overall volume of the packaging for the current pack size of 48 tablets.

Step 6e: Analysis of common disposal methods for packaging materials in different markets. ASSIST-OTCPP 52 presents a pie chart of the most likely end-of-life routes for OTCP packaging materials. This depends on the market in which the OTCP product is sold. It gives the designer an indication of whether a design-for-recycling approach will be beneficial, based on consumers' most commonly adopted disposal methods. This data was gathered from 2018 waste statistics databases online. For this case study, plastic bottles have a 40% chance of being recycled upon disposal in Europe, as that is where 'Pharmaceutical X' is sold.
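The ratios in Step 6d follow directly from the case-study figures. The short sketch below reproduces the arithmetic; treating the bottle as a cylinder to estimate its exterior volume is our simplifying assumption, since the framework's exact volume model is not described in the text.

```python
import math

# Case-study figures for 'Pharmaceutical X' (from the product proposal)
tablets, tablet_weight = 48, 0.8         # count, grams per tablet
bottle_weight, cap_weight = 7.0, 5.0     # grams
bottle_diam, bottle_height = 5.0, 8.0    # cm

product_weight = tablets * tablet_weight             # 38.4 g of tablets
pack_weight = bottle_weight + cap_weight             # 12 g of packaging

# Assumption: approximate the bottle as a cylinder for its exterior volume.
pack_volume = math.pi * (bottle_diam / 2) ** 2 * bottle_height   # ~157 cm^3

print(f"packaging weight per g of tablet: {pack_weight / product_weight:.2f} g/g")
print(f"packaging volume per g of tablet: {pack_volume / product_weight:.2f} cm^3/g")
```

This yields roughly 0.31 g of packaging and about 4.1 cm³ of exterior volume per gram of tablet, consistent with the framework's verdict that the weight ratio is good while the volume ratio leaves room for a more compact pack.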
Step 7: The aesthetic attributes of OTCP packaging are analysed. The GUI for this step is presented in Figure 6 and includes three sections, marked (7a) to (7c).

Figure 6: Graphical user interface (GUI) of the implemented tool, corresponding to Step 7 of the ASSIST-OTCPP 52 framework.

Step 7a: Analysis of the colours on the OTCP packaging and the type of packaging solution used. The primary and secondary colours of different OTCP packages on the market are compared. These are categorised by market and type of OTCP product. The packaging type, whether blister or bottle packaging, is also displayed to the user. In this case, in Europe, the packaging is primarily white, with blue and black colours also present. To make the packaging stand out on the shelf, colours other than these can be used on the packaging.

Step 7b: Analysis of the emotions elicited in the consumer by different coloured branded and generic OTCP packaging. In the surveys with consumers, participants were shown branded and generic OTCP packaging prototypes of three colours: blue, which is the brand colour of the OTCP product, and green and carton, which are typically associated with sustainable packaging. The participants were asked to note which of the 14 emotions depicted on PrEmo cards 92 were elicited by each of the OTCP packaging prototypes. These emotion cards were proposed by Desmet 92 and comprise seven positive and seven negative emotions. This is shown to the designer to convey the effect of branding and of the brand colour. A brand is an identity used to differentiate between products. Good branding should minimise the mental pressure that consumers feel during the decision-making process by reducing the perceived product risks. When the branded and generic prototypes were compared, it was found that the branded packaging was always preferred. As colours are the most striking visual element of the packaging, consumers form an association between the colour and the brand. 31 Green packaging was found to be the least preferred of the three colours for both the branded and generic packaging. While both carton and green coloured packaging typically carry an association of sustainability, 32 for OTCPs, green was disliked. This could be because certain hues of green are associated with sickness. 93

Step 7c: Comparison of consumers' preference for visual and verbal elements of OTCP packaging and QR codes. The survey with consumers also asked participants whether QR codes could replace the informational leaflet and the images present on the packaging. It resulted that consumers prefer graphical elements on the packaging over QR codes.

Step 8: The OTCP packaging standards are presented to the user, organised according to the packaging's life-cycle stages. The standards relate to the packaging's functional characteristics, the testing of packaging materials, filling and assembly, sterilisation, printing and labelling, and end-of-life, as shown in Figure 7.

Step 9: Steps 6, 7 and 8 give the designer indications on how to design aesthetically pleasing and sustainable OTCP packaging with a consumer-focused approach.

Step 10: A QFD (quality function deployment) relates the consumer requirements to the engineering characteristics. Two methods for criteria analysis are the Analytic Hierarchy Process (AHP) and the Analytic Network Process (ANP). 94 In AHP, the decision-making problem has a hierarchical structure, where the goal is at the highest level, followed by the criteria, which are decomposed into sub-criteria. Alternatives are derived from the sub-criteria. 95 By contrast, in ANP, a network structure is adopted to analyse the importance between different levels of elements and the criteria's importance. 96 The ANP algorithm was used because of the high levels of interaction between the elements of aesthetically pleasing and sustainable OTCP packaging. ANP is typically used with QFDs 97-99 to establish the internal relationships between the consumer requirements and the engineering characteristics, and the external relationships between the engineering characteristics and the consumer requirements. 100 The GUI for this step is presented in Figure 8. Step 10 is sectioned into two parts: the QFD table and a circular bar chart highlighting the weights of the engineering characteristics. The QFD table ranks engineering characteristics (top of the table) with respect to consumer requirements (left side of the table). The consumer requirements, a result of the consumer surveys in which participants were asked to rank OTCP packaging characteristics, are weighted. It was found that these rankings depend on the gender of the consumer, so in this case the ranking weights reflect women's preferences for OTCP packaging characteristics. The engineering characteristics presented in Figure 9 were derived from the thematic analysis conducted on the interviews with OTCP packaging development stakeholders. From this step, the designer is guided to prioritise the functional characteristics of the OTCP packaging, namely its compatibility with the pharmaceutical.
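To make the Step 10 ranking concrete, here is a minimal sketch of the QFD weighted-scoring step: consumer-requirement weights multiply the relationship matrix to produce engineering-characteristic priorities. The requirement names, weights, and relationship scores are illustrative placeholders, not the survey-derived values used in ASSIST-OTCPP, and the ANP supermatrix computation used to derive interdependent weights is omitted.

```python
import numpy as np

# Illustrative consumer requirements and weights (placeholders,
# not the survey-derived weights used in ASSIST-OTCPP).
requirements = ["easy to open", "tamper evident", "sustainable", "legible text"]
weights = np.array([0.30, 0.25, 0.25, 0.20])

# Engineering characteristics (columns) scored 0/1/3/9 against each
# requirement (rows), as in a conventional QFD relationship matrix.
characteristics = ["compatibility", "exterior volume", "label area", "material choice"]
relationship = np.array([
    [9, 1, 0, 3],
    [9, 0, 1, 3],
    [1, 9, 0, 9],
    [0, 1, 9, 1],
])

priorities = weights @ relationship   # weighted column sums
priorities /= priorities.sum()        # normalise to relative weights

for name, p in sorted(zip(characteristics, priorities), key=lambda t: -t[1]):
    print(f"{name:16s} {p:.2f}")
```

With these placeholder numbers, compatibility with the pharmaceutical comes out on top, echoing the guidance the framework gives the designer; in the real tool the weights come from the consumer surveys and the ANP analysis.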
The designer will also be aware that the sustainability status of the packaging is important and should prioritise minimising the exterior volume of the packaging. Regarding the aesthetic attributes, in this case, it can be seen that the verbal elements on the packaging are more important than the visual ones. 16,19 In the medical domain, colour is used as an identification aid to reduce medication error, 80 and incorrect medication has been taken because of similarity in packaging. 81 This framework uses OTCP packaging on the market as the basis for the aesthetic attribute comparison, to display which colours are most common for each type of OTCP product (such as painkillers or vitamins). The response to colours depends on the country, 14 and the framework therefore categorises the colours by market.

| EVALUATION OF ASSIST-OTCPP

Two participants suggested including more regulatory aspects, which depend on the active ingredient. This system does not consider the chemical aspects of the pharmaceutical but uses its specifications as a basis. One other participant noted that 'it is a generic tool, in the sense that it is not customised to a particular product or company. So, it has to be open.' As a generic framework was generated, each company would need to tweak the framework and its tool implementation to meet its own packaging functional requirements. One participant gave the example of an OTCP product that is required to have blister packaging. Therefore, while options for bottles are given, the framework also gives suggestions on how to improve blisters for sustainability and suggests aesthetic qualities based on the target consumer demographics. Given these limitations, all participants agreed that the framework and tool would be an asset in their design practice and would consider using them. One participant stated that 'If I had it, I would always use it. Because that will be of fantastic value to anyone of, designers in the future.'

| DISCUSSION

A literature review shows that, even with the high number of design support systems available, 41 designers may not know which design support system is most adequate for their design practices. 42 Furthermore, designers lack support in understanding stakeholder requirements. 13,[43][44][45][46][47] The critical review provides a degree of evidence that there is a research gap in the development of a framework intended for use during task clarification and which supports OTCP designers in considering sustainability and aesthetics. The proposed framework has the potential to result in improvements by ensuring that the stakeholder requirements and the identification of OTCP packaging are met through the aesthetic attribute analysis. Sustainable packaging contributes to a stronger environment, society and economy, as it reduces the negative impacts on all three. This framework guides OTCP packaging designers to consider alternative forms of OTCP packaging. The impacts on the three pillars of sustainability depend on the type of packaging; therefore, the framework informs the designer of the environmental and cost impacts of the OTCP packaging solutions. The framework and computer-based tool incorporate knowledge generated from studies with OTCP packaging stakeholders, which was highlighted as a strength in the evaluation of the design support system. The participants, however, also suggested that the aesthetic attributes could include the surface finish of the packaging, such as varnishes.
Additional aesthetic attributes of the packaging could be included in the computer-based tool implementation of the framework in the future. If this limitation can be overcome, the framework can be used to formulate a problem statement in terms of sustainability, aesthetic and stakeholder requirements. The interviews with OTCP packaging development stakeholders disclosed overlapping requirements between food and pharmaceutical packaging: both types of packaging are required to protect an ingestible product and to assure hygiene and quality to the consumer. 2 Given these similarities, the framework and its implementation in a tool could also be used to design food packaging. The OTCP packaging development stakeholders noted, however, that the main difference is that shelf appeal is more important in food packaging.

| CONCLUSION

It is concluded that the main contribution of this paper lies in an unprecedented framework for a computer-based tool aimed at guiding designers when developing sustainable and aesthetically pleasing OTCP packaging. The proposed framework and its implementation in a computer-based tool to guide OTCP packaging designers contribute to a holistic approach in the task clarification design phase. The novel aspect of the ASSIST-OTCPP 52 framework is that it presents the user with a stakeholder-focused design approach to produce sustainable and aesthetically pleasing OTCP packaging. ASSIST-OTCPP 52 can be used both to improve existing OTCP packaging and to design new packaging. The knowledge contained in the framework provides insight into stakeholder requirements, in the form of emotions and preferences, and into OTCP packages on the market. This knowledge was gathered from surveys with Maltese pharmacists and consumers. In future work, investigating how these preferences vary across cultures might prove important.

DATA AVAILABILITY STATEMENT

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Adapting Free, Prior, and Informed Consent (FPIC) to Local Contexts in REDD+: Lessons from Three Experiments in Vietnam

Free, prior, and informed consent (FPIC) is a means of ensuring that people's rights are respected when projects for reducing emissions from deforestation and forest degradation and enhancing forest carbon stocks (REDD+) are established in developing countries. This paper examines how FPIC has been applied in three projects in Vietnam and highlights two key lessons learnt. First, as human rights and democracy are seen as politically sensitive issues in Vietnam, FPIC is likely to be more accepted by the government if it is built upon the national legal framework on citizen rights. Applying FPIC in this context can ensure that both government and citizens' interests are achieved within the permitted political space. Second, FPIC activities should be seen as a learning process and designed based on local needs and preferences, with accountability of facilitators, two-way and multiple communication strategies, flexibility, and collective action in mind.

Introduction

Human rights and rights-based approaches have increasingly influenced international climate change debates and decision-making [1,2] and, more specifically, have been applied to the still-negotiated international mechanism aiming at reducing emissions from deforestation and forest degradation and enhancing forest carbon stocks (REDD+) in developing countries. REDD+, to a large degree, depends on the willingness of local communities to engage in forest protection. It is thereby assumed that providing secure rights and control over resources to local communities might lead to more effective implementation [3]. On the other hand, some see REDD+ as another attempt to take away control over resources, one that could lead to recentralization of forest governance, exclusion of local people from decision-making, and displacement from forest land held by indigenous groups who are denied access to traditional uses of natural resources [4][5][6]. Safeguards, which are mechanisms to mitigate risks and potential negative impacts of REDD+, therefore need to be in place [7].

A rights-based approach can therefore be useful to provide "benchmarks of acceptable outcomes based on widely agreed principles and legal structure" [8] (p. 23). Article 15 of the Convention on Biological Diversity states: "Access to genetic resources shall be subject to prior informed consent of the Contracting Party providing such resources, unless otherwise determined by that Party." Article 10 of the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP) also states that indigenous peoples shall not be forcibly removed from their lands or territories, and that no relocation shall take place without the free, prior, and informed consent of the indigenous peoples concerned and after agreement on just and fair compensation and, where possible, with the option of return.

Translating the concept from the international to the national policy arena, however, is highly complex [9], as it must be adapted to diverse sociopolitical contexts. As interest in REDD+ grows, so do concerns about how it will affect rural communities in general and indigenous people in particular [10][11][12].
Under the United Nations Framework Convention on Climate Change (UNFCCC), parties to the Convention have agreed to a set of seven safeguards (UNFCCC Decision 1/CP.16) to be promoted and supported during REDD+ implementation, known as the Cancun safeguards. Four of these seven UNFCCC safeguards are social safeguards; three of them are: (i) "Transparent and effective national forest governance structures, taking into account national legislation and sovereignty"; (ii) "Respect for the knowledge and rights of indigenous peoples and members of local communities, by taking into account relevant international obligations, national circumstances and laws, and noting that the United Nations General Assembly has adopted the United Nations Declaration on the Rights of Indigenous Peoples"; and (iii) "The full and effective participation of relevant stakeholders, in particular indigenous peoples and local communities."

Although the term "FPIC" (free, prior, and informed consent) is not explicitly referred to in the Cancun Agreements or in the appendix on REDD+ safeguards, FPIC is addressed indirectly, because the text notes that the General Assembly has adopted UNDRIP, and it is particularly grounded in the second paragraph of the safeguards, which calls for "the full and effective participation of relevant stakeholders, in particular indigenous peoples and local communities."

FPIC is not new; it has previously been applied in development, resource extraction, oil and gas exploitation, and other investment projects within the territories of indigenous peoples [13]. Free means that consent is given freely and voluntarily, with no coercion, manipulation, or intimidation, and following a process directed by the community, respecting the time requirements of indigenous consultation and consensus processes. Prior means that consent is to be sought in advance of any activities, at the early stages of development. In other words, the process of FPIC should be initiated sufficiently in advance of the commencement or authorization of activities, taking into account indigenous peoples' own decision-making processes, in the phases of assessment, planning, implementation, monitoring, evaluation, and closure of a project. Informed means that communities have been provided with complete information and understand the potential impact. Although there is no functional clarity about what constitutes "consent" [14], in general "consent" is understood as collective decision-making [15,16].
Yet the concept of FPIC itself is still new in the REDD+ arena [17]. No definition is universally accepted [18], and there is often a gap between international norms and actual practice in different countries [13]. Furthermore, there is so far no common understanding of how to integrate all parts of FPIC: the elements of free, prior, and informed consent; the links between processes and outcomes; and the requirement that FPIC be employed at certain points in time during a REDD+ activity [19,20]. No unified method is available. FPIC is interpreted by the UN-REDD program (hereafter referred to as UN-REDD) as "the right of indigenous people to give or withhold their (indigenous people) free, prior, and informed consent to actions by others, that affect their land, territories, and natural resources" [15,21]. FPIC is also interpreted by many international scholars as the right of indigenous people to exercise their right of self-determination under international human rights law instruments such as the International Covenant on Civil and Political Rights [18,22,23]. Any REDD+ initiative is thus required "to ensure that indigenous peoples are not coerced or intimidated, that their consent is sought and freely given prior to the authorization or start of any activities, that they have full information about the scope and impacts of any proposed development, and that ultimately their choices to give or withhold consent are respected" [24]. UN-REDD also developed FPIC guidelines to be used by partner countries and applied during national-level activities [25]. The guidelines outline FPIC criteria and propose a step-wise approach detailing what is required from partner countries to meet their extant commitments under a number of international agreements, including ILO Convention 169, UNDRIP, and UNCERD [15]. The guidelines also distinguish consent from mere consultation, specifying that FPIC is meant to enable communities to participate in decision-making processes and to withhold their consent [15].

While UN-REDD adopts a rights-based approach to FPIC implementation, the Forest Carbon Partnership Facility (FCPF) chooses a different one. The Facility is subject to the World Bank's operational policy on Indigenous Peoples (OP/BP 4.10), which requires that the development process fully respect the dignity, human rights, economies, and cultures of indigenous people, and that, where they cannot be avoided, adverse effects on indigenous people be "minimized, mitigated, or compensated" [26]. OP/BP 4.10 also requires that, if a proposed project affects indigenous peoples, borrowers engage in free, prior, and informed consultation and achieve the "broad community support" of the affected indigenous peoples before the Bank will provide financial support. The fact that FPIC is required by the FCPF only in some circumstances might create a gap between its approach and that of UN-REDD. Moreover, the FCPF only requires "consultation," with an exchange of information and views, rather than establishing procedural rights to participation and access to information and creating an enabling environment for participation, as required by "consent" under the Cancun safeguards. Furthermore, different national and regional governments apply different principles, which are often further complicated by donor requirements, NGO perceptions, and local demands.
Although there is a host of FPIC experience from other forest and environmental governance arrangements that can provide lessons for REDD+ [27], few resources are available to train practitioners in the implementation of FPIC for REDD+ [18]. Despite the UN-REDD guidelines, applying FPIC is difficult due to (i) weak understanding of how FPIC is adopted in different political and social contexts and of the institutional arrangements required for FPIC [28]; (ii) lack of experience, with relatively few initiatives undertaken so far [22]; (iii) the procedural norms of FPIC, which tend to yield unexpected and ambiguous results [29]; and (iv) subjective understanding of the terms and requirements of FPIC, influenced by both cultural interpretations and political interests [17]. Further systematic and critical empirical research, including comparative case studies looking at FPIC and FPIC-like regimes operating around the world, could offer some useful insights [28].

In its implementation, FPIC is often interpreted as "the establishment of conditions under which people exercise their fundamental right to negotiate the terms of externally imposed policies, programs, and activities that directly affect their livelihoods or wellbeing, and to give or withhold their consent to them" [19,30] (p. 20). However, the extent to which people can exercise their rights and influence decision-making, and the ways in which messages are communicated in a country, depend on the national political setting and the local context [31]. Therefore, understanding the political environment in which FPIC operates is crucial. In addition, it makes a difference whether the right to FPIC is simply the result of a state's acknowledgment of international laws and standards on REDD+ (such as those of the UN-REDD) or whether it is part of existing domestic legal frameworks or included in legal reforms that states may carry out as part of the REDD-readiness process.

In practice, FPIC rarely lives up to its stated ideals [22] and is often at risk of being seen as simply procedural, that is, followed mechanically without any consideration of the local context [32]. Whether FPIC is merely a procedural guarantee has also received increasing attention from many scholars [33]. Ideally, FPIC should be treated as a long-term learning process [19], and the political, economic, and social context needs to be carefully analyzed and taken into account in the FPIC design process [13]. It is therefore important to learn from FPIC pilot studies that have been conducted in different contexts. Given that many countries are still at a very early stage of understanding what FPIC is and how it can be integrated into their national REDD+ strategies, it is timely for countries to share their experiences with one another in order to facilitate learning on FPIC [25].
In this paper, we use case studies from Vietnam to offer lessons and recommendations for putting FPIC principles into practice effectively and efficiently in a variety of social and economic settings. REDD+ is still in its early stages in Vietnam; as in many other countries, most REDD+ activities are pilot projects that have been under preparation for several years but have not yet reached the full stage of implementation with transfer of carbon. We therefore focus on lessons learned from implementing FPIC, with special attention to how information is conveyed in the initial phases of REDD+ pilot projects in Vietnam. Information is a necessary premise for participation and is part of well-established national and international laws, for example Principle 10 of the Rio Declaration. Furthermore, participation is explicitly included in the UNFCCC safeguards for REDD+, as stated above. We also discuss the challenges faced by the government and project managers in implementing effective FPIC, and we argue that FPIC is likely to be more successful and accepted by the government of Vietnam if it is built upon the national regulatory framework on citizen rights. The next section provides a description of the political context of REDD+ and FPIC in Vietnam. We then describe the research methods in Section 3. This is followed in Section 4 by a presentation and discussion of the findings.

The Political Context of FPIC in Vietnam

FPIC is a politically sensitive issue in many countries, which are reluctant to recognize the collective right of indigenous peoples to self-determination out of fear that it could threaten state sovereignty and lead to an escalation in claims for independence by indigenous peoples [20,28,34]. There is, however, a difference between internal self-determination (indigenous people have the right to choose their political allegiances, to influence the political order in which they live, and to preserve their cultural, ethnic, historical, or territorial identity) and external self-determination (indigenous people have the right to determine their future international status and liberate themselves from existing rules, or to create an independent state); FPIC refers to the first and not to the second [35,36]. Moreover, many states engaged in REDD+ implementation have also ratified global and regional human rights treaties [37] that require them to respect and take positive measures to fulfill rights and to protect subjects within their jurisdiction against violations carried out by third parties [38]. The right to FPIC, therefore, is directly linked to a state's obligation to uphold indigenous peoples' rights in pursuit of its political commitment to those international treaties [30].
In the context of Vietnam, the FPIC components are interpreted as follows [39]: Free means that stakeholders, particularly local people, are entitled to participate freely, without any force or pressure. Prior means that stakeholders are informed and consulted before a proposed project or activity that may impact them is commenced, and thus before they may raise their voice against the project. Informed means that stakeholders are entitled to be clearly and adequately informed of any possible impact (positive, negative, or risk) of a proposed project or activity that may affect them. The information provided to each stakeholder has to indicate the reasons, the nature, the limitations, the scope, the scale, and the schedule of, as well as the possibility to withdraw from, any proposed project or activity developed for a specific site that may be impacted economically, socially, culturally, and environmentally.

Vietnam ratified the International Covenant on Civil and Political Rights in 1982, thereby recognizing the rights of its ethnic minorities. The term "indigenous people" is not used in Vietnam, as it is perceived as a product of colonialism. With the collapse of colonialism, the Vietnamese state referred to indigenous people as "ethnic minorities," indicating their minority status against the Kinh majority [40,41]. Nevertheless, in 2007, Vietnam ratified the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), and in 2009 it became the first country to implement FPIC under the UN-REDD program.

Vietnam's recognition of the rights of citizens and of human rights is guaranteed in its successive constitutions. The 1946 Constitution affirmed that citizens' freedom and democratic rights are guaranteed, including freedom of speech, the right to referendum, and the right to participate in state and political affairs. The 1960 Constitution extended the contents of citizens' rights, paying special attention to the rights of vulnerable groups and to economic, social, and cultural rights. The 1980 Constitution reaffirmed citizens' fundamental rights and made the state responsible for guaranteeing these rights. Article 50 of the 1992 Constitution established "human rights" for the first time, in terms of both definition and content. The Grassroots Democracy regulation (Decree No. 79/2003/ND-CP, Decree No. 29/1998/ND-CP, Resolution No. 45, 1998/NQ-UBTVQH10) in 1998 was considered an important milestone towards achieving democracy and citizens' participation in policy-making [42,43], as it required local governments to be more transparent and democratic, to consult and monitor, and to encourage people to take an active part in social and public management [44]. In 2013, a new constitution elevated the provisions on human rights to the political sphere, as evidenced by the use of "human rights and citizens' rights and obligations" instead of "fundamental rights and obligations of citizens," as was the case in all previous constitutions. In addition, the 2013 Constitution recognizes a number of additional rights, including the right to life, the right to culture, and the right to live in a clean environment. The Constitution also asserts: "the rights and the exercise of human rights and citizens' rights may not be abused to infringe upon national interests and others' lawful rights and interests" (Article 15).
In practice, however, the Vietnam Communist Party (VCP) remains dominant and the ultimate source of decisions, limiting political freedom and constraining the autonomy and political participation of Vietnamese citizens [44,45]. The official perception of democracy in Vietnam is strongly influenced by Marxist-Leninist ideologies of a centralized democracy, whereas Western liberal democracy is depicted in a rather negative way [46]. Thus democracy and human rights remain "sensitive," and the public space remains limited, with unclear boundaries [47]. Neither separation of powers, rule of law, a free press, democratic elections, nor effective opportunities for people to participate in national-level policy-making exist [42]. Only party members can exercise the right of freedom of expression, although they are still limited by the party's strong emphasis on individual duties towards public or national interests [48]. Opposition or dissenting ideas outside the VCP are not permitted, but within its borders conflicting views on the extent of social and economic freedoms exist [49].

Although the Grassroots Democracy regulation emphasizes citizens' rights to be informed and consulted, as well as their roles in supervising and deciding, these are often mere slogans, and people lack the capability to make their voices heard even at the commune level [50]. Old structures, resistance to change by local politicians, mistrust, and officials' lack of knowledge and skills have hampered the transition towards a more open system of governance [46,48], and there has been no significant change in the behavior of governmental staff [50]. This particular political context constitutes a challenging environment for the implementation of FPIC.

Comparative Analysis of Three Case Studies

This paper looks at three case studies in Vietnam. Selection was based on the use of FPIC, social and political representation, and the availability and willingness of local authorities, project proponents, and local communities to participate in the study. The case studies also represent different groups of project proponents (local CSOs in Thai Nguyen, an intergovernmental organization in Lam Dong, and a research community in Nghe An). All three case studies are pilot projects with a clear objective of experimenting with different approaches to draw lessons for REDD+ practitioners and policy makers, and all are at a very early stage of development. Therefore, we focus only on the first three elements of FPIC and draw lessons based on the activities implemented so far. A brief description of each of the case study projects is provided below.

In Thai Nguyen, the aim of the project was to identify locally based and adapted REDD+ approaches that engage communities in implementing FPIC and REDD+ by themselves, with the support of sustainable funding. The ethnic minorities are expected to be the initiators of pilot REDD+ activities. The project began by raising awareness about climate change and REDD+ among district, commune, and village authorities and villagers. The program included discussion and training on the drivers of deforestation and forest degradation, the role of natural forests, the condition of forests and carbon stocks, requirements under REDD+ such as the Cancun safeguards, and results-based payments. Consultation with district and commune authorities on institutions and aspects of REDD+ implementation, particularly benefit-sharing mechanisms and measuring, reporting, and verification (MRV), was followed by consultation with villagers and village heads.
Project participants also took part in training workshops on democratic rights (as set out in the Grassroots Democracy Ordinance) and on alternative livelihood options.

I-REDD+, Nghe An Province

The I-REDD+ project (Impacts of Reducing Emissions from Deforestation and Forest Degradation and Enhancing Carbon Stocks) is somewhat different from the other two, as it is a research project, with activities in Con Cuong District in Nghe An Province. The aim of I-REDD+ is to ensure that future REDD+ mechanisms are based on the highest level of knowledge about carbon storage in complex landscape mosaics, monitoring technology, impacts on local livelihoods, and governance structures for managing payments and benefit sharing. In one component of the project, a combination of role-playing games and 3D modeling of the village landscape was used to engage village communities in an analysis of the possible impacts of REDD+ on carbon sequestration, livelihood changes, food security, and ecosystem service provision [51]. A large range of participatory methods was tested with local people during two field trips in 2012. The aims of these two field trips were to understand the village history and its influence on local livelihoods, assess the opportunity costs of land-use changes, identify issues related to natural resource management, and rank local priorities for development. As part of the research activities, village communities used simulations to assess the possible impacts of recent land-use changes on their livelihoods and then collectively explored various REDD+ implementation scenarios. These participatory research activities were considered a test of how the "informed" component of a future FPIC process might be implemented.

Survey Methods

We conducted six focus group discussions (in two villages in Lam Dong, two villages in Nghe An, and two villages in Thai Nguyen) with a total of 75 participants. In these groups, villagers discussed the strengths and weaknesses of the content, process, and dissemination methods used in each project and looked at ways to deal with the pitfalls and to better capture local preferences and choices. In addition, we conducted open-ended interviews with central government staff (four informants) and with provincial government staff and non-governmental organizations (NGOs) (10 informants in Thai Nguyen, six in Nghe An, and six in Lam Dong). Household interviews were also conducted with 42 randomly selected villagers who had participated in project activities and who were available and willing to participate in the study. The aim of these interviews was to learn how stakeholders evaluated each project's approach to informing villagers about REDD+, their perceptions and understanding of REDD+, and the pros and cons of the methods used for implementing FPIC.
FPIC as a Rights-Based Approach Embedded in the National Legal Framework

Although the government has developed supportive democracy policies that clearly indicate the rights of citizens and communities, there is a reluctance to give more rights and power to local communities. Vietnam was one of the first countries to pilot FPIC, as part of the UN-REDD Program [25], but FPIC is not mentioned in the National REDD+ Program (Decision 799/2012, approved by the Prime Minister), nor has Vietnam developed specific guidelines or manuals. The absence of FPIC from the legal framework leaves it open to government staff, as well as to the project proponents in the three cases, to make their own interpretations of FPIC. Nevertheless, government control remains firm. Though representatives of the central government asserted that citizens can exert their right to reject REDD+, the government can override their decision and impose a program if it is seen as crucial for national development. This perception is rooted in the strong political view that "free consent" must remain within legal and constitutional boundaries. Authorities of all government agencies interviewed also expressed concerns about allowing people to exercise such rights, as they were not confident that local people had enough information on, or understanding of, REDD+ and similar mechanisms to exercise their rights effectively.

Lacking clear guidance on how to implement FPIC, project proponents in the three cases interpreted and implemented FPIC in different ways (Table 1), based on their project objectives and expected outcomes, leading to different understandings of FPIC among local authorities and local people (Table 2). The local CSO in Thai Nguyen was the only one explicitly linking FPIC to the grassroots democracy regulations and Vietnam's constitution. An interviewee from this local CSO asserted: "it is very challenging to implement FPIC in Vietnam because the concepts of human rights, citizen rights, and democracy are seen as politically sensitive and CSO cannot operate freely like in other countries. What we do will have to comply with or be based on national regulations. We read the principles of FPIC and see the only way to make FPIC work is to link to grassroots democracy regulations [if] local authorities are willing to uptake this idea." As a result, while central government representatives and authorities interviewed in Lam Dong and Nghe An were quite skeptical about FPIC implementation in their provinces, the Thai Nguyen case received strong political support and interest from both the central government and the Thai Nguyen local authorities. A government representative in Thai Nguyen stated: "When we first heard about FPIC, we were very concerned as indigenous human rights and citizen rights are indeed very sensitive. However, when the project proponent explained that this is nothing new and that it complies with grassroots democracy regulations and the Vietnam constitution, we were much more confident and supportive of the project." A central government interviewee also stated: "although there is no clear guidance and common understanding on FPIC in Vietnam, the case of Thai Nguyen shows how applying international requirement to national contexts facilitates government commitment. Other approaches adopted in other REDD+ project are often seen as exogenous models unrelated to national regulations. Conversely, project proponents had more problems introducing and getting acceptance of FPIC among local authorities."
Using the permitted political space to implement FPIC and making use of actual citizens' rights helped local government authorities to accept, understand, and therefore support the implementation of FPIC as part of their daily work and assignments.

As was shown in Thai Nguyen, and as also suggested by Doyle [30], FPIC can easily include information on national legislation, including the degree of recognition of the right to FPIC and other self-determination rights. Indeed, to ensure that FPIC is well placed and well implemented at all scales, legally binding obligations and clear domestic legislation modeled after international norms are required [13]. In Thai Nguyen, the project not only informed people about the content of the REDD+ project but also provided training on grassroots democracy, legal rights (as set out in Vietnam's Law on Complaints and Denunciations and Law on Cooperatives), and the rights and responsibilities of village leaders (Table 2). By contrast, in the case of the UN-REDD Program, the FPIC team leaders were apparently familiar with national guidelines and documents on grassroots democracy, but there was no evidence that these guidelines had been incorporated into the FPIC process [52]. Yet even with the political support and acceptance of government staff, as in the case of the Thai Nguyen project, operating FPIC remains challenging. Government control remains strong, and the political space for participation is limited. In Vietnam, no project activity is allowed without the presence of authorities. Freely given consent "with no coercion, manipulation, or intimidation" through a community-driven process remains elusive, and project proponents are limited in experimenting with different approaches when introducing REDD+ to villagers.

The selection of facilitators is also politically driven. In Thai Nguyen, the project proponents interviewed said they had preferred to select villagers who already had communication skills, as they believed that these independent facilitators would be free of any political interests. However, the commune authority obliged the project proponents to nominate the local first secretary of the Communist Party as facilitator, as a condition for the project to be approved. According to all villagers interviewed in Thai Nguyen, these facilitators, as party members, were unable to create an environment in which villagers felt comfortable enough to discuss their ideas freely. All villagers interviewed also claimed that the village leaders (nominated by the state) in all three sites often attempted to control the participatory process and to prevent villagers from discussing the facilitator selection process. In all three case studies, project implementers and local authority representatives stated that the absence of local government officials allowed a more open environment in which local people could speak out. However, when government officers are not involved, they are unlikely to support (or sometimes even approve) consultations at the village level.
Baker [18] suggested that the appropriate way to achieve FPIC is to first agree on key principles of an overall framework and then consider context-specific aspects once designs are further advanced and locations are determined. As stated earlier, the government of Vietnam has indirectly committed to FPIC, partly because the government signed up for international and regional human rights treaties, and partly because FPIC is in line with the existing Vietnam Constitution and grassroots democracy regulations. As the three projects have demonstrated, and as the Vietnam Constitution and grassroots democracy regulations require, stakeholders entitled to FPIC will have to include both indigenous people and wider communities. The inclusion of FPIC in the national REDD+ program, together with guidance on operational principles, could serve as an obvious legal platform for FPIC implementation. Better understanding of FPIC principles on the part of central and provincial governments would also help to embed those principles in practice.

FPIC as a Learning Process and Empowerment Tool

As indicated above, in each of the three sites, different ways of implementation were used and different information was conveyed, which resulted in different perceptions. On the other hand, in all cases, the "Informing" stage was done through indoor meetings (most often at the house of a village leader or the community hall) (Table 3). Interviewees in Lam Dong and Nghe An pointed out that holding the activities outdoors would not only improve explanations about the linkages between forests, climate change mitigation, and their livelihoods, but also reduce the stress of being intensively put in study mode. With meetings held indoors, information was conveyed mostly through conventional approaches like PowerPoint presentations and handouts. These were not very effective in creating open and free discussions. Only in Nghe An did project proponents introduce role-playing for land-use simulations, which helped build trust between villagers and facilitators and minimize the influence of powerful actors. As implied, communicating REDD+ requires a mix of methods and strategies, from verbal to visual and from normative to affective. However, facilitators and communicators can only be sure that their messages will be understood if they, in turn, understand their audiences, including their values, fears, and hopes, and the context in which the communication is taking place [53]. The learning process should thus be a two-way process whereby all parties participate in the learning.
The three cases studied show that a knowledgeable and skilled facilitator was a determining factor for people's participation in project activities. However, different types of facilitators are perceived differently. In Thai Nguyen and Lam Dong, there were two groups of facilitators: (i) the core group, including REDD+ experts, who came from the project; and (ii) local facilitators who were selected and trained by the core group. Both of these groups carried out consultations in the same communities. All interviewees said that they preferred consultations with the external experts over consultations with local facilitators, for two main reasons. First, trained local facilitators can deliver basic information on the topic but cannot engage in two-way communication because they do not have sufficient knowledge to respond to questions. Second, villagers felt that local facilitators are not neutral, even if they speak the local language. The perception is that, in a community made up of three or four ethnic groups, local facilitators might be biased toward a certain group and cannot be neutral when facilitating the discussion. Similarly, in Nghe An, villagers interviewed reported that they had enjoyed having external expert facilitators with no association with local issues or organizations.

Learning also requires ownership of the local people in the process. However, according to the villagers interviewed in all three provinces, local people had limited influence on how and about what they were to be informed. Project proponents made decisions about the training content and methods without consulting community members about their needs or preferences. Although this is attributable, at least in part, to the limited time and resources available for designing the training, it also implies that the project proponents did not have sufficient information to be able to effectively tailor their training activities to the interests and capacity of the trainees. The process should not only be about informing but also about consulting, discussing, and learning.

Given the limitations of time and financial resources that most projects face, trade-offs must be made between the need to include all villagers in the process and the difficulties of training large disparate groups. Inviting a smaller group to take part in the learning process may be more efficient if participants are selected carefully and appropriate mechanisms for passing on the information are in place. Indigenous peoples should be able to participate through their own freely chosen representatives and customary institutions. The inclusion of a gender perspective and the participation of indigenous women are essential, as well as the participation of children and youth as appropriate.
Learning for empowerment, however, is not only a matter of providing information but needs to be accompanied by efforts to help communities understand the issues. It requires an inclusive and equitable dialogue, allowing all stakeholders to develop appropriate solutions in an atmosphere of mutual respect, and thus requires ample time and an effective system for communicating among stakeholders. Preparations by I-REDD+ included several field trips before the focus group discussions, and the CERDA project included a thorough two-year baseline study, which enabled these two projects to tailor their approaches to local interests. By contrast, the project in Lam Dong showed that more conventional approaches to communicating REDD+, with a relatively narrow focus on scientific findings, synthesis reports, and descriptions of extreme weather events, tend to remain rather abstract for local people. Moreover, local villagers interviewed in Lam Dong also claimed that 2 h of discussion (see Table 1) is not enough for local communities to understand REDD+.

As a learning process, targeted stakeholders, including both local government and local people, must be provided with accurate and comprehensive information about REDD+. The information provided also needs to address the potential positive and negative impacts of REDD+. Villagers in Lam Dong and Thai Nguyen claimed they were only told about the positive impacts of REDD+. Only people in Nghe An were informed of the potentially negative impacts. In Lam Dong in particular, people were not informed about the risks associated with taking part in the program or of the costs they might incur [52]. As a result, this raised unrealistic expectations that the new projects would provide them with alternative livelihood options, better forest protection, and development opportunities. Although these perceptions fueled people's interest in taking part, it is essential to avoid creating unrealistic expectations among local people.

Feedback was also minimal. Interviewees in the three sites highlighted that although much information was provided and agreement was reached on how REDD+ would be implemented, in none of the study sites were local people given the minutes or any other records of the meetings. In Lam Dong the minutes were read back to attendees at the end of the village meeting, but in Nghe An, minutes were taken for research purposes but not shared with local people (as the project was not actually implementing REDD+). In Lam Dong, the meeting proceedings were recorded but not in great detail. Yet, both local authorities and villagers interviewed in the three sites expressed their strong interest in being provided such information for future use.

Policy makers often assume ordinary people lack full knowledge and understanding of climate change and therefore are poor decision makers in need of expert support [54,55]. Nerlich et al.
[53] criticize this tendency to differentiate between expert and non-expert, claiming that communication should be grounded in dialogue and contextual understanding. Therefore, communication should be seen as part of collective action [56] and social learning [57], and needs to be based on a better understanding by facilitators not only of the target audience but also of how to engage them affectively, connecting the messages to cultural values and beliefs [31,54]. Furthermore, as the content and availability of accurate and clear information on REDD+ are still evolving, there are inherent difficulties in informing communities of details about which neither local people nor most project staff have a firm understanding [19,52,58]. FPIC should therefore be understood as a long-term learning process [19], and the political, economic, and social context needs to be carefully analyzed and taken into account in the FPIC design process [13].

As highlighted by the case studies, FPIC, if designed and implemented well, can be an effective learning tool to empower local communities and enhance their participation in the development and design of REDD+ [59-62] and thereby address some of the underlying social drivers of deforestation [63]. The concept of FPIC built on the national legal framework might not be new to local government and local people, but certainly the practice has not been exercised widely. Thus, it should be seen as a learning process for both local people and local authorities. FPIC regimes must therefore be set up in such a way that they encourage productive and informed engagement of local people [28]. Given the great disparities in power and resources between the actors involved, FPIC might run the risk of constantly reinforcing and legitimizing the dominant actors. The details of procedural norms (e.g., who will participate, how long the consultation will last, what type of compensation should be made) therefore need to be considered and designed carefully [29]. The format in which information is conveyed should take into account social, institutional, and cultural barriers [25,31,58]. Communication and consultation processes must be culturally appropriate, with information provided in the appropriate language.

Conclusions

FPIC has evolved gradually, and is the result of both hard and soft legal norms at international and national levels. Yet, there is a gap between international norms and national practice, due to specific political and economic conditions in each country. How FPIC is translated on the ground depends on political views, government interests, and the local governments' understanding of FPIC. In line with Savaresi [38], this paper has shown that integrating REDD+ with human rights obligations would avoid duplicating efforts and exploit the consensus that already underpins existing human rights instruments. As such, it follows that rights-based approaches to conservation are successful where secure rights to resources underpin community engagement [64]. Our findings show that framing FPIC within the human rights and grassroots regulations will provide the added benefit of institutional support to better implement and enforce FPIC.
Our paper also shows that political regimes (e.g., Vietnam's command and control system) may undermine the implementation of FPIC on the ground if interpretations of the elements "free", "prior", and "informed consent" do not adhere to the intentions of FPIC. The unwillingness of the political elite to transfer decision-making power from state to non-state actors has strong implications for access to and control over resources and for the understanding of what FPIC means. Better information to local authorities about REDD+ and the role of FPIC as embedded in national policies and context can help to move FPIC and REDD+ forward, but this also depends on the willingness of governments to provide political space for other actors.

FPIC should also be treated as a learning process; the information provided should be useful for participants, and the ways information is provided should be accommodated with adequate venues and accountable and independent facilitators. Sufficient time and budget are also required for careful implementation. Consultations take place within a highly dynamic and complex political and socioeconomic context. As seen in these case studies from Vietnam, no single approach will fit all situations. Informing local communities about REDD+ is a complex and challenging task because of the nature and impacts of REDD+ itself, the range of knowledge needed to respond to it, and the ability of facilitators to ensure that learning processes are both dynamic and accountable. Given the diversity of local socioeconomic settings, FPIC guidelines need to be flexible enough to be adaptable to national and local contexts, where legislation must acknowledge that FPIC is an adaptive learning process focused on enhancing stakeholders' engagement in REDD+.

3.1.1. UN-REDD Vietnam Program, Lam Dong Province

FPIC was piloted in 2010 in 78 villages in the districts of Lam Ha and Di Linh in Lam Dong Province, as part of the UN-REDD Vietnam Program. However, the project practitioners had neither the experience nor clear guidance on how to conduct an FPIC pilot. In theory, the pilot FPIC process involved nine steps: (i) preparation; (ii) consultation with local officials; (iii) recruitment of local facilitators; (iv) training of local facilitators; (v) awareness raising; (vi) village meeting; (vii) recording the decision; (viii) reporting to UN-REDD Vietnam; and (ix) verification and evaluation. In practice, awareness raising and the village meeting were combined, making it an eight-step process. The aim of this exercise was to gain some experience to guide the future national implementation of FPIC.

3.1.2. Pilot Project to Build Community Readiness for REDD+, Thai Nguyen Province

This project was launched in the Dai Tu and Vo Nhai districts in Thai Nguyen Province in 2010 and has been managed since 2011 by the Center of Research and Development in Upland Areas (CERDA), a local CSO.

Table 1. Implementation of FPIC in three projects.

Table 3. How and where information was provided at the three sites.
2015-09-18T23:22:04.000Z
2015-07-15T00:00:00.000
{ "year": 2015, "sha1": "d6fbd9f1e2e2f95c763c825c21c7cdf74c105e73", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4907/6/7/2405/pdf?version=1437028128", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "d6fbd9f1e2e2f95c763c825c21c7cdf74c105e73", "s2fieldsofstudy": [ "Environmental Science", "Political Science" ], "extfieldsofstudy": [ "Business" ] }
5032993
pes2o/s2orc
v3-fos-license
Control your anger! The neural basis of aggression regulation in response to negative social feedback

Negative social feedback often generates aggressive feelings and behavior. Prior studies have investigated the neural basis of negative social feedback, but the underlying neural mechanisms of aggression regulation following negative social feedback remain largely undiscovered. In the current study, participants viewed pictures of peers with feedback (positive, neutral or negative) to the participant's personal profile. Next, participants responded to the peer feedback by pressing a button, thereby producing a loud noise toward the peer, as an index of aggression. Behavioral analyses showed that negative feedback led to more aggression (longer noise blasts). Conjunction neuroimaging analyses revealed that both positive and negative feedback were associated with increased activity in the medial prefrontal cortex (PFC) and bilateral insula. In addition, more activation in the right dorsal lateral PFC (dlPFC) during negative feedback vs neutral feedback was associated with shorter noise blasts in response to negative social feedback, suggesting a potential role of dlPFC in aggression regulation, or top-down control over affective impulsive actions. This study demonstrates a role of the dlPFC in the regulation of aggressive social behavior.

Introduction

People are strongly motivated to be accepted by others and to establish a sense of belonging. Receiving negative social feedback, therefore, is a distressing experience, related to serious negative consequences such as feelings of depression and anxiety (Nolan et al., 2003). For some individuals, receiving negative social feedback can result in aggression toward people who have negatively evaluated or rejected them (Twenge et al., 2001; Leary et al., 2006; DeWall and Bushman, 2011; Chester et al., 2014; Chester and DeWall, 2015; Riva et al., 2015). However, the relation between negative social feedback and subsequent aggression is not well understood. In the current study we investigated the relation between receiving negative social feedback and subsequent aggression using neuroimaging, which allowed us to (i) examine the neural correlates of negative social feedback relative to neutral or positive feedback, (ii) examine aggressive responses toward the person signaling negative social feedback, and (iii) examine the association between the neural correlates of negative social feedback and behavioral aggression.
Social rejection and negative social feedback have previously been studied using a variety of experimental paradigms that manipulate social contexts. For example, the negative feelings associated with social rejection have been extensively studied using Cyberball, an online ball tossing game in which three players toss balls to each other, until at some point in the game, one of the players is excluded. It is consistently found that this type of social exclusion leads to feelings of distress, negative mood and a decreased satisfaction of the need for a meaningful existence (Williams et al., 2000; Williams, 2007). Neuroimaging studies point to a role of the midline areas of the brain, specifically the dorsal and subgenual anterior cingulate cortex (ACC), as well as the anterior insula, as important brain regions responding to social exclusion (Cacioppo et al., 2013; Rotge et al., 2015). Other studies have used a peer feedback social evaluation paradigm to study responses to both positive and negative social feedback. In such paradigms, participants believe that they are socially evaluated by same-aged peers, based on first impressions of their profile picture (Somerville et al., 2006; Gunther Moor et al., 2010; Hughes and Beer, 2013). These studies showed that the dorsal ACC (dACC) was particularly activated in response to unexpected social feedback, irrespective of whether this was positive or negative (Somerville et al., 2006), whereas ventral medial prefrontal cortex (mPFC) and ventral striatum activation was larger for positive feedback compared with negative feedback (Guyer et al., 2009; Davey et al., 2010; Gunther Moor et al., 2010).

More insight into the neural and behavioral correlates of social evaluation and rejection has been derived from studies testing the relation between social rejection and subsequent aggression. One study combined the Cyberball task in the scanner with a subsequent aggression index using a noise blast task outside of the scanner (Chester et al., 2014). Individuals responded more aggressively following the experience of social rejection, but intriguingly, these effects were dependent on whether the participant showed low or high executive control. Participants who scored high on executive control displayed lower aggression after social rejection, suggesting that executive control abilities may down-regulate aggression tendencies. It has been suggested that self-control relies strongly on the lateral prefrontal cortex (PFC), which is thought to exert top-down control over subcortical, affective, brain regions (such as the striatum) to suppress outputs that otherwise lead to impulsive responses and actions (Casey, 2015). Transcranial magnetic stimulation studies have indeed implicated a causal role for the lateral PFC in executing self-control when choosing long-term rewards (Figner et al., 2010). Similarly, the lateral PFC may have an important role in down-regulating aggression following rejection or negative social feedback. This hypothesis finds support in a study where participants had the opportunity to aggress against peers who had excluded them during Cyberball while undergoing transcranial direct current stimulation (tDCS) (Riva et al., 2015). tDCS of the right ventrolateral PFC reduced participants' behavioral aggression toward the excluders.
Taken together, prior studies suggested an important role of dorsal and ventral mPFC regions in processing negative and positive social feedback, but the exact contributions of these regions are not consistent across studies and may depend on the experimental paradigm. The first goal of this study was to disentangle effects of positive and negative feedback in a social evaluation paradigm (Somerville et al., 2006). A novel component of this study relative to prior studies is that we included a neutral baseline condition, in which participants received neutral feedback on a subset of the trials. Based on prior research, we expected that positive social feedback would result in increased activation in the subgenual ACC (Somerville et al., 2006) and the ventral striatum (Guyer et al., 2009; Davey et al., 2010; Gunther Moor et al., 2010). In contrast, we expected that negative social feedback would be associated with increased activity in the dACC/dorsal mPFC (dmPFC) and the insula. Prior studies remained elusive about whether dACC/mPFC and insula activities were associated with salient events per se (Somerville et al., 2006) or social rejection specifically (Eisenberger et al., 2003; Kross et al., 2011). Therefore, we conducted conjunction analyses for both positive and negative feedback vs a neutral baseline, as well as direct contrasts testing for differences between positive and negative social feedback.

Importantly, there may be individual differences in how participants respond to negative social feedback, which may be associated with increased neural activity in the lateral PFC, as has been found in social rejection studies (Chester and DeWall, 2015). The second goal of this study was therefore to examine how individuals respond to negative social feedback, and whether lateral PFC activity is related to aggression regulation following negative social feedback. Therefore, the paradigm included a second event where participants could directly retaliate against the peer who judged them, by sending a loud noise blast (Twenge et al., 2001; Chester et al., 2014). Noise blast duration was measured after each trial within the functional magnetic resonance imaging (fMRI) task and therefore we could examine how neural activity related to individual differences in noise blast duration. On a behavioral level, we hypothesized that negative social feedback would trigger reactive aggression, i.e. longer noise blasts (Twenge et al., 2001; Reijntjes et al., 2011; Riva et al., 2015). In addition, we hypothesized that less aggression (i.e. more aggression regulation, shorter noise blasts) would be related to increased activation in the lateral PFC (Casey, 2015; Riva et al., 2015), particularly during negative feedback.

Participants

Thirty participants between the ages of 18 and 27 participated in this study (15 females, M = 22.63 years, s.d. = 2.62). They were either contacted from a participant database or they responded to an advert placed online. The institutional review board of the Leiden University Medical Center (LUMC) approved the study and its procedures. Written informed consent was obtained from all participants. All participants were fluent in Dutch, right-handed, and had normal or corrected-to-normal vision. Participants were screened for MRI contraindications and had no history of neurological or psychiatric disorders. All anatomical MRI scans were reviewed and cleared by a radiologist from the radiology department of the LUMC. No anomalous findings were reported.
Participants' intelligence quotient (IQ) was estimated with the subtests 'similarities' and 'block design' of the Wechsler Adult Intelligence Scale, third edition (WAIS-III; Wechsler, 1997). All estimated IQs were in the normal to high range (95-135; M = 113.92, s.d. = 9.23). IQ scores were not correlated with behavioral outcomes of the Social Network Aggression Task (SNAT) (noise blast duration after positive, neutral, and negative feedback and noise blast difference scores, all P's > 0.244).

Social Network Aggression Task

The SNAT was based on the social evaluation paradigm of Somerville et al. (2006) and Gunther Moor et al. (2010). Prior to the fMRI session, participants filled in a profile page at home, which was handed in at least 1 week before the actual fMRI session. The profile page consisted of personal statements such as: 'My favorite sport is . . .', 'This makes me happy: . . .', 'My biggest wish is . . .'. Participants were informed that their profiles were viewed by other individuals. During the SNAT participants were presented with pictures and feedback from same-aged peers in response to the participants' personal profile. This feedback could either be positive ('I like your profile', visualized by a green thumb up), negative ('I do not like your profile'; red thumb down) or neutral ('I don't know what to think of your profile', grey circle) (Figure 1a).

Following each peer feedback (positive, neutral, negative), participants were instructed to send a loud noise blast to this peer. The longer they pressed a button, the more intense the noise would be, which was visually represented by a volume bar (Figure 1b). Participants were specifically instructed that the noise was not really sent to the peer, but that they had to imagine that they could send a noise blast to the peer, with the volume intensity of the participants' choice. This was done to reduce deception, and prior studies showed that imagined play also leads to aggression (Konijn et al., 2007). Unbeknownst to the participants, the profile was not judged by others, and the photos were taken from an existing database with pictures matching participants' age range (Gunther Moor et al., 2010). Peer pictures were randomly coupled to feedback, ensuring equal gender proportions for each condition. None of the participants expressed doubts about the cover story.

Prior to the scan session, the noise blast was presented to the participants twice during a practice session: once with stepwise buildup of intensity and once at maximum intensity. Two evaluation questions were asked after hearing the maximum intensity: 'How much do you like the sound?' and 'How much do you dislike the sound?'. Participants rated the sound on a 7-point scale, with 1 representing 'very little' and 7 representing 'very much'. In order to prevent that pressing the button during the experimental task would punish the participants themselves, they only heard the intensity of the noise blast during the practice session and not during the fMRI session. To familiarize participants with the task, participants performed six practice trials.
The SNAT consists of two blocks of 30 trials (60 trials in total), with 20 trials for each social feedback condition (positive, neutral, negative), presented semi-randomized to ensure that no condition is presented more than three times in a row. Figure 1c displays an overview of one SNAT trial. Each trial starts with a fixation screen (500 ms), followed by the social feedback (2500 ms). After another fixation screen (jittered between 3000 and 5000 ms), the noise screen with the volume bar appears, which is presented for a total of 5000 ms. As soon as the participant starts the button press, the volume bar starts to fill up, with a newly colored block appearing every 350 ms. After releasing the button, or at maximum intensity (after 3500 ms), the volume bar stops increasing and stays on the screen for the remainder of the 5000 ms. Before the start of the next trial, a fixation cross is presented (jittered between 0 and 11550 ms). The optimal jitter timing and order of events were calculated with Optseq 2 (Dale, 1999).

Exit questions

Following the MRI session, three exit questions were asked: 'How much did you like reactions with a thumb up?', 'How much did you like reactions with a circle?' and 'How much did you like reactions with a thumb down?'. Participants rated the reactions on a 7-point scale, with 1 representing 'very little' and 7 representing 'very much'.

MRI data acquisition

MRI scans were acquired with a standard whole-head coil on a Philips 3.0 Tesla scanner (Philips Achieva TX). The SNAT was projected on a screen that was viewed through a mirror on the head coil. Functional scans were collected during two runs of T2*-weighted echo planar images (EPI). The first two volumes were discarded to allow for equilibration of T1 saturation effects. Volumes covered the whole brain with a field of view . . .

MRI data analyses

Preprocessing. MRI data were analyzed with SPM8 (Wellcome Trust Centre for Neuroimaging, London). Images were corrected for slice timing acquisition and rigid body motion. Functional scans were spatially normalized to T1 templates. Due to T1 misregistration, one participant was normalized to an EPI template. Volumes of all participants were resampled to 3 x 3 x 3 mm voxels. Data were spatially smoothed with a 6 mm full width at half maximum isotropic Gaussian kernel. Translational movement parameters never exceeded 1 voxel (<3 mm) in any direction for any participant or scan (movement range: 0.001-1.22 mm, M = 0.055, s.d. = 0.036).

First-level analyses. Statistical analyses were performed on individual subjects' data using a general linear model. The fMRI time series were modeled as a series of two events convolved with the hemodynamic response function (HRF). The onset of social feedback was modeled as the first event with a zero duration and with separate regressors for the positive, negative and neutral peer feedback. The start of the noise blast was modeled for the length of the noise blast duration (i.e. length of button press) and with separate regressors for noise blast after positive, negative and neutral feedback. Trials on which the participants failed to respond in time were marked as invalid. Note that this happened rarely; on average 3.78% of the trials were invalid. The least squares parameter estimates of the height of the best-fitting canonical HRF for each condition were used in pairwise contrasts. The pairwise comparisons resulted in subject-specific contrast images.
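To make the first-level model just described more concrete, here is a minimal sketch of how event regressors can be built by convolving condition onsets with a canonical double-gamma HRF. This is an illustration only: the TR, number of scans, onset times and HRF parameters below are made-up placeholders, not values from the study (which used SPM8's own implementation).

```python
import numpy as np
from scipy.stats import gamma

TR = 2.2            # assumed repetition time (s); not reported in this excerpt
n_scans = 200
t = np.arange(0, 32, TR)

# Canonical double-gamma HRF (SPM-like shape; common default parameters)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

def event_regressor(onsets_s, durations_s):
    """Stick/boxcar function sampled on the scan grid, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset, dur in zip(onsets_s, durations_s):
        start = int(round(onset / TR))
        stop = max(start + 1, int(round((onset + dur) / TR)))
        box[start:stop] = 1.0
    return np.convolve(box, hrf)[:n_scans]

# Hypothetical onsets: feedback events (zero duration) and noise blast events
# (duration = button press length); the real model has one regressor per condition.
X = np.column_stack([
    event_regressor([10, 50, 90], [0, 0, 0]),        # e.g. negative feedback onsets
    event_regressor([14, 54, 94], [1.2, 0.8, 2.1]),  # e.g. subsequent noise blasts
])
print(X.shape)  # (200, 2): two design matrix columns for the GLM
```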
Higher-level group analyses. Subject-specific contrast images were used for the group analyses. A full factorial analysis of variance (ANOVA) with three levels (positive, negative and neutral feedback) was used to investigate the neural response to the social feedback event. We calculated the contrasts 'Positive vs Negative feedback', 'Positive vs Neutral feedback' and 'Negative vs Neutral feedback'. To investigate regions that were activated both after negative social feedback and after positive social feedback, we conducted a conjunction analysis to explore the main effect of social evaluation. Based on Nichols et al. (2005), we used the 'logical AND' strategy. The 'logical AND' strategy requires that all the comparisons in the conjunction are individually significant (Nichols et al., 2005).

All results were False Discovery Rate (FDR) cluster corrected (P_FDR < 0.05), with a primary voxel-wise threshold of P < 0.005 (uncorrected) (Woo et al., 2014). Coordinates for local maxima are reported in MNI space. To further visualize patterns of activation in the clusters identified in the whole brain regression analysis, we used the MarsBaR toolbox (Brett et al., 2002) (http://marsbar.sourceforge.net).

Behavioral analyses

Noise blast manipulation check. The ratings of how much participants liked the maximum intensity noise blast indicated that overall the noise blast was not liked (M = 1.47, s.d. = 0.78; range 1-4) and much disliked (M = 5.67, s.d. = 1.30; range 1-7). These results show that the noise blast was indeed perceived as a negative event by the participants.

Social feedback manipulation check. To verify whether participants differentially liked the social feedback conditions (positive, negative, neutral), we analyzed the exit questions with a repeated measures ANOVA. Analyses showed a significant main effect of type of feedback on feedback liking, F(2, 58) = 53.63, P < 0.001 (GG corrected), with a large effect size (x2 = 0.53). Pairwise comparisons (Bonferroni corrected) showed that participants liked negative feedback (M = 3.13, s.d. = 0.14) significantly less than neutral feedback (M = 4.23, s.d. = 0.14, P < 0.001) and positive feedback (M = 5.23, s.d. = 0.16, P < 0.001). Participants also liked neutral feedback significantly less than positive feedback (P < 0.001).

Noise blast duration. A repeated measures ANOVA was performed on noise blast duration after positive, negative and neutral feedback. Results showed a significant main effect of type of social feedback on noise blast duration, F(2, 58) = 75.57, P < 0.001 (GG corrected), with a large effect size (x2 = 0.41) (Figure 1d). Pairwise comparisons (Bonferroni corrected) revealed that noise blast duration after negative feedback (M = 1517.08, s.d. = 126.94) was significantly longer than noise blast duration after neutral feedback (M = 930.41, s.d. = 84.77, P < 0.001), and after positive feedback (M = 483.62, s.d. = 47.19, P < 0.001). Noise blast duration after neutral feedback was significantly longer than after positive feedback (P < 0.001).
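The 'logical AND' conjunction described above can be stated compactly: a voxel counts as common activation only if it passes the threshold in every contrast separately, which is equivalent to thresholding the minimum statistic across contrasts. A toy numpy sketch with made-up t-values and an arbitrary cutoff:

```python
import numpy as np

# Toy voxel-wise t-maps for the two contrasts entering the conjunction
t_pos_vs_neu = np.array([3.2, 0.4, 2.9, 1.1, 4.0])
t_neg_vs_neu = np.array([2.8, 3.5, 3.1, 0.2, 3.7])
t_crit = 2.66  # e.g. a voxel-wise threshold corresponding to P < 0.005

# 'Logical AND' conjunction (Nichols et al., 2005): a voxel survives only if it
# is individually significant in every contrast, i.e. if the minimum statistic
# across contrasts exceeds the threshold.
conjunction = np.minimum(t_pos_vs_neu, t_neg_vs_neu) > t_crit
print(conjunction)  # [ True False  True False  True]
```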
To derive a measure indicative of individual differences in aggression, we calculated the differences in noise blast duration between negative vs neutral feedback and positive vs neutral feedback. The noise blast difference for positive-neutral was significantly negatively correlated with the noise blast difference for negative-neutral (r = -0.48, P = 0.008), indicating that shorter noise blasts after positive feedback (compared with neutral feedback) were related to longer noise blasts after negative feedback (compared with neutral feedback). Next, noise blast differences were correlated with the exit questions. The difference for negative-neutral was positively correlated with the liking of positive feedback (r = 0.39, P = 0.032) and negatively correlated with the liking of negative feedback (r = -0.57, P = 0.001), indicating that longer noise blasts after negative feedback were related to a stronger preference for positive social feedback and a stronger disfavor of negative social feedback (Supplementary Figure S1a and b). Similarly, the noise blast difference for positive-neutral was negatively correlated with the liking of positive feedback (r = -0.42, P = 0.021) and positively correlated with the liking of negative feedback (r = 0.73, P < 0.001), indicating that a stronger preference for positive social feedback and a stronger disfavor of negative social feedback were related to shorter noise blasts after positive feedback (Supplementary Figure S1c and d).

fMRI whole brain analyses

Social evaluation. The first goal was to examine neural activity in the contrast positive vs negative feedback at the moment of peer feedback. The contrast Positive > Negative feedback resulted in activation with local maxima in the bilateral lateral occipital lobes, left postcentral gyrus, and activation in the right and left striatum, extending into the subgenual ACC (Figure 2a and Supplementary Table S1). The contrast Negative > Positive feedback did not result in any significant clusters of activation. Next, we tested how neural activity to positive and negative social feedback related to a neutral baseline condition. The contrast Negative > Neutral feedback resulted in activity in the bilateral insula and mPFC (Figure 2b and Supplementary Table S2). The reversed contrast (Neutral > Negative feedback) did not result in any significant clusters of activation. The contrast Positive > Neutral feedback also revealed widespread activation in the bilateral insula and mPFC. In addition, the contrast resulted in increased activity in the ventral striatum, the subgenual ACC, as well as regions such as the occipital lobe, as shown in Figure 2c (Supplementary Table S2). The reversed contrast (Neutral > Positive feedback) resulted in activity in the right insula and right postcentral gyrus (Supplementary Table S2).

Social evaluation conjunction. The analyses above suggested partially overlapping activation patterns for positive and negative social feedback, relative to a neutral baseline. To formally investigate the regions that were activated both after negative social feedback and after positive social feedback, we conducted a conjunction analysis to explore a main effect of social evaluation. Common activation across both positive and negative social feedback was observed in the insula and the mPFC, as well as the bilateral occipital lobes, including the left fusiform face area (Figure 2d and Supplementary Table S3).
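The difference-score logic of the paragraph above is straightforward to reproduce. The sketch below uses synthetic per-participant durations (drawn roughly around the reported condition means), since the individual data are not available here; only the computation, not the numbers, mirrors the analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # participants

# Hypothetical per-participant mean noise blast durations (ms), loosely based
# on the reported condition means; placeholders, not the study's data.
dur_pos = rng.normal(480, 250, n)
dur_neu = rng.normal(930, 450, n)
dur_neg = rng.normal(1520, 700, n)

# Difference scores used as individual aggression indices
diff_neg_neu = dur_neg - dur_neu
diff_pos_neu = dur_pos - dur_neu

# Pearson correlation between the two difference scores, as in the text
r, p = stats.pearsonr(diff_pos_neu, diff_neg_neu)
print(f"r = {r:.2f}, p = {p:.3f}")
```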
Brain-behavior associations

Noise blast duration. To test the association between brain activity and behavior in response to negative social feedback, we conducted a whole brain regression analysis at the moment of receiving negative social feedback (relative to neutral feedback; Negative > Neutral), with the difference in noise blast duration after negative and neutral feedback as a regressor. This way, we tested how initial neural responses to feedback were related to subsequent aggression. The analyses revealed that increased activation in the right dorsal lateral PFC (dlPFC) was associated with smaller increases in noise blast duration after negative social feedback compared with neutral feedback (Figure 3). A similar relation was observed for the left amygdala, left hippocampus and bilateral superior parietal cortex (Supplementary Table S4). The reversed contrast (positive relation between Negative > Neutral feedback and noise blast length difference) did not result in any significant activation.

Discussion

This study investigated the relation between negative social feedback and subsequent aggression, using neuroimaging. The goals of this study were 3-fold: (i) to disentangle neural signals of positive and negative social feedback, (ii) to examine aggressive responses toward the person signaling negative social feedback and (iii) to test whether lateral PFC activity is related to aggression regulation after experiencing negative social feedback. To these ends, we developed a new social peer evaluation paradigm that included neutral feedback (to be able to compare positive and negative feedback to a neutral baseline) and the possibility to retaliate against the peer that gave the feedback (to be able to study aggression related to social feedback). In line with prior behavioral studies, we found that negative social feedback was related to applying a longer noise blast toward the peer (Chester et al., 2014). At the neural level, conjunction analyses showed that both negative and positive social feedback resulted in increased activity in the mPFC and the bilateral insula. Comparing the conjunction analyses with the separate contrasts of negative and positive vs neutral feedback showed that positive feedback resulted in increased activity in the striatum and the ventral mPFC, whereas negative feedback activation merely overlapped with the dorsal mPFC and insula activation observed following both positive and negative feedback. Finally, we found that increased lateral PFC activity after negative social feedback was associated with relatively shorter noise blast durations after negative feedback, indicative of more aggression regulation.

Results of prior studies left undecided whether there is a unique neural coding for negative social feedback compared with positive social feedback. In this study we found that, consistent with prior studies (Guyer et al., 2009; Davey et al., 2010; Gunther Moor et al., 2010), there was increased activity in the ventral mPFC and the striatum after positive feedback. Numerous studies have shown that the striatum is involved in reward processing (for a review, see Sescousse et al.
2013) and this fits well with theories suggesting that positive evaluations and social acceptance activate brain regions overlapping with those that are activated by primary feelings of reward (Lieberman and Eisenberger, 2009). Notably, there was no neural activation that was specific for negative social feedback. In Cyberball paradigms, a number of studies observed specific heightened activity in the insula and ACC in response to social rejection, which was interpreted as the feeling of social pain (Eisenberger and Lieberman, 2004; Lieberman and Eisenberger, 2009). There are several differences in the experimental paradigms, however, that may explain the divergent results. That is to say, in Cyberball paradigms social rejection is unexpected (e.g. exclusion after a period of inclusion) and is therefore likely to violate social expectations. In contrast, in social evaluation paradigms such as used in the current study, equal proportions of negative, positive and neutral feedback are presented, which may result in more equal saliency of negative and positive feedback. The current findings, which show enhanced insula and mPFC activity following both positive and negative feedback (relative to neutral feedback), suggest that the insula and mPFC in social evaluation paradigms might work as a salience network, and signal events that are socially relevant (Guroglu et al., 2010; Van den Bos et al., 2011). Resting-state fMRI studies confirm that these regions are often active in concert, and have referred to this network as a salience network (Damoiseaux et al., 2006; Jolles et al., 2011; Van Duijvenvoorde et al., 2015). Future research may disentangle the role of expectation violation in more detail by asking participants to make predictions about whether they expect to be liked (Somerville et al., 2006; Gunther Moor et al., 2010), in combination with positive, negative and neutral feedback.

An additional goal of this study was to examine the association between brain activation and behavioral responses to negative social feedback. A vast line of research has already shown that social rejection can result in retaliation (Twenge et al., 2001; Leary et al., 2006; DeWall and Bushman, 2011; Chester et al., 2014; Riva et al., 2015). Our study shows that receiving negative social feedback is also followed by more aggressive behavior (i.e.
by a longer noise blast toward the peer). In addition, we show that more activity in the right dlPFC is related to 'less' aggression after negative social feedback (compared with neutral feedback), indicating that the lateral PFC is an important neural regulator of social aggression. Several studies on structural brain development have shown that the quality of brain connectivity between the PFC and the striatum is related to impulse control (Peper et al., 2013; van den Bos et al., 2014). That is to say, a large study on structural brain connectivity in typically developing individuals (258 participants, aged 8-25) revealed that less white matter integrity between subcortical and prefrontal brain regions was associated with more trait aggression (Peper et al., 2015). Moreover, Chester and DeWall (2015) recently demonstrated that more functional connectivity between the nucleus accumbens and the lateral PFC during decisions about aggressive acts was related to less behavioral aggression. This study is the first to investigate aggressive responses after positive, neutral and negative feedback, and shows a role of the dlPFC in individual differences in the regulation of aggressive behavior.

Some limitations regarding this study need to be acknowledged. First, although the noise blast is often used as a measure of aggression (e.g. Bushman, 2002; Chester et al., 2014; Riva et al., 2015), our cover story stated that the peers would not hear the noise blast. That is to say, the aggression measure may reflect frustration and anger, and hypothetical aggression. Future research should further test the ecological validity of the noise blast as a measure of aggression by including additional measures of aggression or information on participants' histories of aggressive behavior. Secondly, our paradigm did not include an 'opt out' option; that is, we told participants to always push the noise blast button, even after positive feedback. This was done to keep task demands as similar as possible between the conditions. We explained that the noise would be very short and at very low intensity if the button was released as quickly as possible. However, participants may have wanted to refrain from any noise blast after positive feedback. Future research could take this into account by implementing options to respond either positively, neutrally or negatively toward the peer, as can for example be implemented by using symbols (Jarcho et al., 2013).
In conclusion, we found evidence that the insula and mPFC generally respond to socially salient feedback, with no significant differentiation between negative and positive feedback. Positive social feedback received less attention in prior research and it has often been used as a baseline, but our findings show activation in the ventral mPFC and the striatum that is stronger for positive feedback. Additionally, the lateral PFC emerged as an important modulator of individual differences in aggression regulation. This may imply that individuals who show strong activation in the lateral PFC after negative social feedback may be better able to regulate behavioral impulses, and speculatively, impulsive responses in general (Casey et al., 2011). This hypothesis should be addressed in longitudinal research, including more general measures of impulsivity. An interesting direction for future research is to examine the neural mechanisms underlying social evaluation and aggression regulation processes in populations that are known for difficulties with response control and affect regulation, such as ADHD (Evans et al., 2015), externalizing problems (Prinstein and La Greca, 2004) and depression (Nolan et al., 2003; Silk et al., 2014).

Fig. 1. Social Network Aggression Task. (a) The different feedback types: positive, neutral and negative. (b) Visual representation of the intensity buildup of the volume bar. (c) Display of one trial and timing of the SNAT. (d) Noise blast duration across the different social feedback conditions. Asterisks indicate significant differences with P < .05.

Fig. 2. Whole brain full factorial ANOVA conducted at group level for the contrasts (a) Positive > Negative feedback, (b) Negative > Neutral feedback, (c) Positive > Neutral feedback and (d) the conjunction of the Positive > Neutral and Negative > Neutral feedback contrasts. Results were FDR cluster corrected (P_FDR < 0.05), with a primary voxel-wise threshold of P < 0.005 (uncorrected).

Fig. 3. Brain regions in the contrast Negative > Neutral feedback that were significantly negatively correlated with the difference in noise blast duration after negative vs neutral feedback trials. Results were FDR cluster corrected (P_FDR < 0.05), with a primary voxel-wise threshold of P < 0.005 (uncorrected). The right panel shows the negative relationship between the difference in noise blast duration and right dlPFC activity (for visual illustration only; no statistical tests were carried out on the region of interest).
2018-04-03T05:43:56.169Z
2016-05-01T00:00:00.000
{ "year": 2016, "sha1": "2829fc42e6a12c49d6d70e10eb5adfc26c452569", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/scan/article-pdf/11/5/712/27102549/nsv154.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "2829fc42e6a12c49d6d70e10eb5adfc26c452569", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
146121123
pes2o/s2orc
v3-fos-license
Torsion-type $q$-deformed Heisenberg algebra and its Lie polynomials

Given a scalar parameter $q$, the $q$-deformed Heisenberg algebra $\mathcal{H}(q)$ is the unital associative algebra with two generators $A,B$ that satisfy the $q$-deformed commutation relation $AB-qBA= I$, where $I$ is the multiplicative identity. For $\mathcal{H}(q)$ of torsion-type, that is, if $q$ is a root of unity, a characterization is obtained for all the Lie polynomials in $A,B$, and bases, graded structure and commutation relations for the associated Lie algebras are studied.

Keywords: exciton transfer is not at issue here; this paper's keywords are $q$-deformed Heisenberg algebra, Lie polynomials, torsion type, root of unity.

Introduction

The main objects considered in this article are the $q$-deformed Heisenberg algebras $\mathcal{H}(q)$, the parametric family of unital associative algebras with two generators and defining commutation relation $AB - qBA = I$. When $q = 1$, this relation becomes $AB - BA = I$, which is satisfied by the linear operators $A = \frac{d}{dx} : f(x) \mapsto f'(x)$ and $B = M_x : f(x) \mapsto x f(x)$ acting on suitable invariant linear spaces, such as the linear spaces of all polynomials or all formal power series in an indeterminate $x$ with complex or real coefficients, or differentiable real-valued or complex-valued functions in a real variable $x$, with the usual definition of the derivative operator from calculus. This follows from the Leibniz rule $\frac{d}{dx}\left(x \cdot f(x)\right) = x\,\frac{d}{dx}f(x) + \left(\frac{d}{dx}x\right)f(x)$. Up to a constant scaling factor (involving Planck's constant) this is the Heisenberg canonical commutation relation of Quantum Mechanics. The $q$-deformed Heisenberg algebras are important in modern Quantum Physics and Mathematics (see [4] and references there). In noncommutative geometry, and in investigations on quantum groups and quantum spaces, the $q$-deformed Heisenberg algebras appear as one of the key examples and as a building block for other non-commutative objects. In the calculus of $q$-difference operators and in $q$-difference equations (a subject whose history goes back well over one and a half centuries, to Euler and Jackson) the $q$-deformed Heisenberg commutation relation plays the same fundamental role as the undeformed commutation relation in differential calculus and differential equations. Partly thanks to this, the most efficient way of obtaining many central results in $q$-combinatorics and the theory of $q$-special functions is to make use of $q$-deformed Heisenberg algebras and their representations.

Whenever the scalar field has characteristic 0, the undeformed Heisenberg algebra ($q = 1$) is a simple algebra in the sense that the only two-sided ideals are the zero ideal and the whole algebra. On the other hand, the $q$-deformed Heisenberg algebra for $q \neq 1$ is not simple and has many nontrivial two-sided ideals. If moreover $q \neq 1$ is a root of unity, then the center, consisting of all elements commuting with all elements of the algebra, is nontrivial, that is, consists not only of constant multiples of the identity element [4,5]. Any associative algebra yields a Lie algebra when the associative multiplication is replaced with the commutator as a Lie bracket, and any subset of the associative algebra generates either the whole Lie algebra or a proper Lie subalgebra. If the associative algebra is infinite-dimensional, then the resulting Lie algebra can be either finite-dimensional or infinite-dimensional and can in general have complicated structure and properties as a Lie algebra. It is an important and interesting problem to characterize all elements of an associative algebra belonging to a Lie subalgebra generated by a given subset of the associative algebra.
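Before moving on, readers who want to see the deformed relation realized by operators for general $q$ may find a small symbolic check useful. The Jackson $q$-derivative from $q$-difference calculus is a standard such realization of $A$ (with $B = M_x$ as above); the sketch below is an illustration added here and is not part of the paper's own development.

```python
import sympy as sp

x, q = sp.symbols('x q')
f = x**3 + 2*x + 1  # an arbitrary test polynomial

def D_q(g):
    # Jackson q-derivative: D_q g(x) = (g(qx) - g(x)) / ((q - 1) x)
    return sp.simplify((g.subs(x, q*x) - g) / ((q - 1)*x))

def M_x(g):
    # multiplication operator: g(x) -> x g(x)
    return x*g

# With A = D_q and B = M_x, (AB - qBA) f should equal f, i.e. AB - qBA = I
lhs = sp.simplify(D_q(M_x(f)) - q*M_x(D_q(f)))
print(sp.simplify(lhs - f))  # expected: 0
```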
Any element in the Lie subalgebra can be uniquely represented as a finite linear combination of elements in any basis of the Lie subalgebra, and so, for a given subset of the associative algebra, the description of the bases in the Lie subalgebra generated by this subset yields a description of all elements of the associative algebra belonging to the Lie subalgebra, as finite linear combinations of the basis elements. We show in this work the interesting property of the Lie algebra being studied that its elements can be represented in other ways than using linear combinations of iterated commutators of the generators. This involves the fact that in noncommutative associative algebras defined by generators and relations, the elements are often given in the form of noncommutative polynomial expressions in the generators inherited from the free algebra, with proper identification of equal elements due to reductions following from the commutation relations. In order to determine whether an element of the associative algebra belongs to the Lie subalgebra, one needs to describe a basis in this Lie subalgebra and then determine whether and how this given element can be expressed as a linear combination of the elements of that basis or not. Thus, formulas for rewriting elements, when possible, using repeated commutators and linear combinations, starting from the initial generating set of the Lie subalgebra, are important as well.

The Lie subalgebras for the $q$-deformed Heisenberg algebra $\mathcal{H}(q)$ when $q$ is not a root of unity have been considered in [3], where especially the Lie subalgebra generated by the generators $A$ and $B$ has been studied in more detail, and in particular its bases and some properties have been described. This paper is devoted to Lie subalgebras for $q$-deformed Heisenberg algebras $\mathcal{H}(q)$ when $q$ is a root of unity, which is a natural continuation of [3]. Specifically, for the Lie subalgebra generated by the generators $A$ and $B$, the basis is described in terms of $A$ and $B$, and thus a characterization of elements of the Lie subalgebra generated by them is achieved. The formulas allowing to rewrite elements written in a standard normal form of $\mathcal{H}(q)$ in terms of these basis elements of this Lie subalgebra are obtained, and some properties of the basis and graded structure of the Lie subalgebra are described.

Preliminaries

Let $\mathbb{F}$ be a field, and let $\mathcal{A}$ be a unital associative $\mathbb{F}$-algebra, which we simply call an algebra throughout. We turn $\mathcal{A}$ into a Lie algebra over $\mathbb{F}$ with Lie bracket given by $[f,g] := fg - gf$ for all $f, g \in \mathcal{A}$. We reserve the term subalgebra to refer to a subset of $\mathcal{A}$ that is also an algebra using the same operations, and, so that this kind of substructure under the algebra structure is distinguished from the corresponding substructure under the Lie algebra structure, we use the term Lie subalgebra to mean a subset of $\mathcal{A}$ that is also a Lie algebra over $\mathbb{F}$ under the same Lie algebra operations. We treat the terms ideal and Lie ideal similarly. However, the term derived algebra refers to a Lie algebra structure, as we are not interested in any analogue of it in the associative structure. Given $f_1, f_2, \ldots, f_n \in \mathcal{A}$, we define the Lie subalgebra of $\mathcal{A}$ generated by $f_1, f_2, \ldots, f_n$ as the smallest Lie subalgebra $\mathcal{B}$ of $\mathcal{A}$ that contains $f_1, f_2, \ldots, f_n$; i.e., if $\mathcal{C}$ is a Lie subalgebra of $\mathcal{A}$, with $f_1, f_2, \ldots, f_n \in \mathcal{C}$ and $\mathcal{C}$ is contained in $\mathcal{B}$, then $\mathcal{B} = \mathcal{C}$. The elements of $\mathcal{B}$ are called the Lie polynomials in $f_1, f_2, \ldots, f_n$. Denote by $I$ the unity element of $\mathcal{A}$.
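In a finite-dimensional ambient algebra, the Lie subalgebra generated by $f_1, \ldots, f_n$ can be computed mechanically by closing the linear span under commutators. The sketch below does this for matrices; it is only meant to illustrate the definition (the algebras studied in this paper are infinite-dimensional, so no such terminating computation applies to them directly), and the example matrices are arbitrary choices.

```python
import numpy as np

def lie_closure(gens, tol=1e-10):
    """Return an orthonormal basis (flattened) of the smallest Lie algebra of
    matrices containing `gens`: repeatedly adjoin commutators [X, Y] until the
    linear span stops growing."""
    basis = []

    def add(M):
        # Gram-Schmidt-style rank test: keep M only if it enlarges the span
        v = M.flatten().astype(float)
        for b in basis:
            v = v - np.dot(v, b) * b
        if np.linalg.norm(v) > tol:
            basis.append(v / np.linalg.norm(v))
            return True
        return False

    mats = [np.array(g, dtype=float) for g in gens]
    for g in mats:
        add(g)
    frontier = list(mats)
    while frontier:
        new = []
        for X in frontier:
            for Y in mats:
                C = X @ Y - Y @ X
                if add(C):
                    new.append(C)
        mats.extend(new)
        frontier = new
    return basis

# Example: e = [[0,1],[0,0]] and f = [[0,0],[1,0]] generate all of sl(2)
e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
print(len(lie_closure([e, f])))  # 3 = dim sl(2)
```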
Given any nonzero element $f$ of $\mathcal{A}$, we interpret $f^0$ as $I$. Given $f \in \mathcal{A}$, the linear map $\operatorname{ad} f : \mathcal{A} \to \mathcal{A}$ is defined by $g \mapsto [f,g]$. Fix a $q \in \mathbb{F}$. The $q$-deformed Heisenberg algebra is the unital associative $\mathbb{F}$-algebra $\mathcal{H}(q)$ generated by two elements $A, B$ satisfying the relation $AB - qBA = I$. By a simple application of [6, Lemma 1.7], the relation $AB - qBA = I$ cannot be expressed in terms of only Lie polynomials in $A, B$, for all the cases for $q$ considered in [3] and also in this work. Denote the set of all nonnegative integers by $\mathbb{N}$, and the set of all positive integers by $\mathbb{Z}^+$. If $q \notin \{0, 1\}$, then by [5, Corollary 4.5], the elements

$$B^l[A,B]^k, \qquad [A,B]^k, \qquad [A,B]^kA^l \qquad (k \in \mathbb{N},\ l \in \mathbb{Z}^+) \qquad (2)$$

form a basis for $\mathcal{H}(q)$. Define $\{0\}_q := 0$, and for each $n \in \mathbb{Z}^+$ we recursively define $\{n\}_q := 1 + q\{n-1\}_q$. That is, $\{n\}_q = 1 + q + \cdots + q^{n-1}$. If $q \neq 1$, then $\{n\}_q = \frac{1-q^n}{1-q}$. The Gaussian binomial coefficients or $q$-binomial coefficients are recursively defined by $\binom{n}{0}_q := 1$, $\binom{n}{k}_q := 0$ for $k > n$, and $\binom{n}{k}_q := \binom{n-1}{k-1}_q + q^k\binom{n-1}{k}_q$ for any $n, k \in \mathbb{Z}^+$. These $q$-binomial coefficients satisfy the symmetry property $\binom{n}{k}_q = \binom{n}{n-k}_q$ for any $k \in \{0, 1, \ldots, n\}$, and as a consequence also the properties (5)-(7) used below.

2.1 Structure constants of the $q$-deformed Heisenberg algebra

If $\mathcal{A}$ is an algebra with basis $\{\beta_j : j \in J\}$ for some index set $J$, then it is worthwhile to know how the product of any two basis elements $\beta_j$ and $\beta_k$ can be expressed as a linear combination of $\{\beta_j : j \in J\}$, i.e., $\beta_j\beta_k = \sum_i e_i(j,k)\,\beta_i$ for some scalars $e_i(j,k)$. We refer to the scalars $e_i(j,k)$ as the structure constants of the algebra $\mathcal{A}$ with respect to the basis $\{\beta_j : j \in J\}$ of $\mathcal{A}$. In this subsection, we illustrate how to obtain the structure constants of $\mathcal{H}(q)$ with respect to its basis (2). From [5, Equations (18), (19), (39)] and from [3, Proposition 3.3], an algorithm for expressing the product of any two elements in (2) as a linear combination of (2) can be completely determined using the relations (8)-(11). All such relations are consequences of the simple relation $AB - qBA = I$. The basis elements from (2) are essentially of the following three types:

$$[A,B]^k, \qquad (12)$$
$$[A,B]^kA^l, \qquad (13)$$
$$B^l[A,B]^k, \qquad (14)$$

where $k \in \mathbb{N}$, $l \in \mathbb{Z}^+$. We discuss in this subsection products of two basis elements from (2) under different cases, based on whether each factor is of the type (12), (13), or (14). The simplest cases are when we have a product of two basis elements both of type (12), the case when we have a basis element of type (12) multiplied by a basis element of type (13), and the case when we have a basis element of type (14) multiplied by a basis element of type (12). That is, if we let $m, k \in \mathbb{N}$ and $n, l \in \mathbb{Z}^+$ be arbitrary, we have the relations (15)-(17). As we consider more complicated cases, the relations (8) to (11) will turn out to be useful. The relation (8) is relevant when we have a product of a basis element of type (13) multiplied by a basis element of type (12), and also in the case when we have a product of two basis elements of type (13); more explicitly, we have (18), (19). For the case when we have a basis element of type (12) multiplied by a basis element of type (14), and the case when we have a product of two basis elements of type (14), the relation (9) can be used to obtain (20), (21). The remaining cases involve less simple computations involving the relations (10), (11). For notational convenience, for each $l \in \mathbb{Z}^+$ and each $i \in \{0, 1, \ldots, l\}$ we define scalars $c_i(l)$ and $d_i(l)$ by (22), (23), and so the relations (10), (11) can be rewritten as (24), (25), respectively. Consider the product $[A,B]^mA^n \cdot B^l[A,B]^k$. If $n < l$, we rewrite the product as $[A,B]^m(A^nB^n)B^{l-n}[A,B]^k$ and replace $A^nB^n$ using (24). This results in an expression in which we rewrite $[A,B]^{m+i}B^{l-n}$ using (9), and we get a linear combination of the basis elements (2). Similar computations can be used for the product $B^n[A,B]^m \cdot [A,B]^kA^l$; together, these computations yield the relations (26)-(29).
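Since the displayed relations (8)-(29) are not reproduced above, one can at least verify mechanically the two reordering rules that the discussion appears to rely on, namely $A[A,B] = q[A,B]A$ and $[A,B]B = qB[A,B]$; attributing these to (8) and (9) is our reconstruction from how the text uses them. The sketch below rewrites words over $\{A, B\}$ with the single rule $AB \to qBA + I$ until normal order is reached.

```python
import sympy as sp

q = sp.symbols('q')

def reduce(elem):
    """Rewrite every occurrence of 'AB' via AB = q BA + I until no word
    contains 'AB' (i.e. all B's stand to the left of all A's)."""
    elem = dict(elem)
    while True:
        target = next((w for w in elem if 'AB' in w), None)
        if target is None:
            return {w: sp.expand(c) for w, c in elem.items() if sp.expand(c) != 0}
        c = elem.pop(target)
        i = target.index('AB')
        w1, w2 = target[:i], target[i + 2:]
        for w, coeff in ((w1 + 'BA' + w2, q*c), (w1 + w2, c)):
            elem[w] = elem.get(w, 0) + coeff

def mul(e1, e2):
    # Concatenate words, then normal-order the result
    prod = {}
    for w1, c1 in e1.items():
        for w2, c2 in e2.items():
            prod[w1 + w2] = prod.get(w1 + w2, 0) + c1*c2
    return reduce(prod)

def sub(e1, e2):
    out = dict(e1)
    for w, c in e2.items():
        out[w] = out.get(w, 0) - c
    return reduce(out)

A, B = {'A': 1}, {'B': 1}
C = sub(mul(A, B), mul(B, A))  # C = [A, B] = (q - 1) BA + I
print(reduce(C))               # {'BA': q - 1, '': 1}
# Both differences below reduce to the empty dict, confirming the two rules:
print(sub(mul(A, C), {w: q*c for w, c in mul(C, A).items()}))  # A[A,B] - q[A,B]A
print(sub(mul(C, B), {w: q*c for w, c in mul(B, C).items()}))  # [A,B]B - qB[A,B]
```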
We summarize in Table 1 the relations discussed in this subsection that give the structure constants of H(q) with respect to the basis (2).

Lie polynomials when q is not a root of unity

Denote by L(q) the Lie subalgebra of H(q) generated by A, B; we follow the notation of [3, Section 5] for the case in which q is nonzero and not a root of unity. Then, by [2, Lemma 5.1], the elements in (2), except any power of A or B with exponent not equal to 1, can be expressed as elements of L(q); such elements are listed in (30). Furthermore, these elements form a basis for L(q) [3, Theorem 5.8]. It was also shown in [3] that L(q) is a Lie ideal of H(q). Thus, if we compute the Lie bracket of an element of (30) with an element of (2), whether the latter is in (30) or not, the result is a linear combination of (30). A further result in [3] is that the resulting quotient Lie algebra H(q)/L(q) is one-step nilpotent. This implies that the Lie bracket of any two elements of (2) that are both not in (30) can be expressed uniquely as a linear combination of (30). The Lie polynomials for the case q = 0 are discussed in [3, Section 4], while the case q = 1 leads to a trivial low-dimensional Lie algebra, as discussed in [3, Section 1].

Consequences of q being a root of unity on structure constants and commutators

Throughout, we assume that q is a root of unity, and we denote by p the least positive integer satisfying q^p = 1. As mentioned in Section 2.2, we have a trivial Lie algebra of Lie polynomials when q = 1, and so we further assume that q ≠ 1, and hence p ≥ 2. Following the terminology in [4, Definition 5.2], these conditions mean that q is of torsion type with order p. To remind us of these restrictions, we denote H(q) and L(q) by H_p and L_p, respectively, and we extend the terminology by calling H_p a torsion-type q-deformed Heisenberg algebra with order p. We note here that the relation AB − qBA = I still implies, by the Diamond Lemma [1, Theorem 1.2] and also by [4, Theorem 3.1], that the elements (2) form a basis for H_p. For n ∈ N, let n̄ denote the smallest nonnegative residue of n modulo p; this is the notation (31). Then n = Np + n̄ for some integer N, and since q^p = 1, we further have {n}_q = {n̄}_q = (1 − q^{n̄})/(1 − q), which is nonzero if n̄ ≠ 0, by the minimality of p. Given l ∈ Z^+ and i ∈ {1, 2, ..., l}, the consequences for the q-binomial coefficient \binom{l}{i}_q depend on the comparison of l with p. Suppose that l < p. By the properties of q-binomial coefficients (see, for instance, [4, p. 185]) we have the identity (33) expressing \binom{l}{i}_q as a quotient of products of factors 1 − q^x, where each factor 1 − q^x in (33) satisfies 1 ≤ x < p. By the minimality of p, we have \binom{l}{i}_q ≠ 0 whenever l < p, 1 ≤ i < p. Suppose l = p. Then the numerator of (33) becomes zero, while the denominator is still nonzero because all the factors 1 − q^y appearing there satisfy 1 ≤ y < p. Then, by further using (6), (7), (32), we find that \binom{p}{i}_q is 1 if i ∈ {0, p} and is 0 if i ∈ {1, 2, ..., p − 1}. For the case l = p + 1, we use (5); by routine computations based on the q-Pascal identity, \binom{p+1}{i}_q equals 1 if i ∈ {0, 1, p, p + 1} and 0 if i ∈ {2, ..., p − 1}. This can be routinely extended to any l ≥ p by induction. By these comments, we obtain the vanishing pattern recorded in (34), (35).
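These vanishing statements can be checked numerically. The sketch below (our own illustration, not taken from [1]–[5]) evaluates the q-integers and q-binomial coefficients at a primitive p-th root of unity, and additionally verifies the defining relation AB − qBA = I in a p-dimensional matrix representation in which B acts as a nilpotent shift and A as the associated q-difference operator; such a finite-dimensional representation closes up precisely because {p}_q = 0 in the torsion-type case.

```python
import numpy as np

p = 5
q = np.exp(2j * np.pi / p)  # primitive p-th root of unity

def q_int(n):
    """The q-integer {n}_q = 1 + q + ... + q^(n-1)."""
    return sum(q**i for i in range(n))

def q_binom(n, k):
    """Gaussian binomial via the q-Pascal recursion
    [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return q_binom(n - 1, k - 1) + q**k * q_binom(n - 1, k)

# {p}_q = 0 and [p,i]_q = 0 for 0 < i < p at a primitive p-th root of unity
assert abs(q_int(p)) < 1e-9
assert all(abs(q_binom(p, i)) < 1e-9 for i in range(1, p))
assert abs(q_binom(p + 1, 1) - 1) < 1e-9  # [p+1,1]_q = {p+1}_q = 1

# p-dimensional representation: B e_n = e_{n+1} (with B e_{p-1} = 0),
# A e_n = {n}_q e_{n-1}; then AB - qBA = I holds exactly because {p}_q = 0.
B = np.diag(np.ones(p - 1), -1)
A = np.diag([q_int(n) for n in range(1, p)], 1)
assert np.allclose(A @ B - q * B @ A, np.eye(p))
```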
Consequences on the structure constants

We now discuss the consequences of the facts explained at the beginning of Section 3 for the structure constants of H_p with respect to the basis (2). Recall that the structure constants can be obtained from the relations (15)–(21) and (26)–(29). Based on the appearance of (15)–(17), the structure constants in these relations are not affected by q being a root of unity. As for the relations (18)–(21), the appearance of the structure constants shows that we simply have to reduce the exponents of q to the smallest nonnegative representatives of these integers modulo p. We still use our notation (31) for this, and so (18)–(21) become (36)–(39). As for the more complicated relations, we first note that, using (6), (7), (22), (23), simple routine computations show that p divides the integers appearing in the relevant exponents, which yields the relation (45). The relation (45) can now be used to derive the parallels of (26)–(29): we make use of the same pattern of computations described in Section 2.1, except that we use (36)–(39) instead of (18)–(21), and (45) instead of (10), (11). The resulting relations are the root-of-unity analogues of (26)–(29).

Consequences on the commutators of basis elements

In this subsection, we discuss the consequences of q being of torsion type for the commutator of two arbitrary basis elements of H_p from (2).

Definition 3.1. Denote by Γ the linear subspace of H_p spanned by the basis elements of the forms [A, B]^m A^n and B^n [A, B]^m from (2), where m, n are both nonzero and both congruent to zero modulo p.

In all future references, by a basis element of Γ we refer to the basis of Γ given in Definition 3.1 above. We claim that the linear subspace Γ has the interesting property that the commutator of any two basis elements from (2), when expressed as a linear combination of (2), has no term that is in Γ. To establish this claim, we need the following.

Lemma 3.2. For every m, k ∈ N and every n, l ∈ Z^+, the commutators (50)–(54), when expressed as linear combinations of the basis elements (2), do not have a term which is a basis element of Γ.

Proof. We first tackle the most complicated commutator, which is (53). Consider the case n > l. Using (26), (29), we obtain an expansion (55) with coefficients e_i = q^{(k+i)(n−l)} c_i(l) − q^{−(m+k)l} d_i(l). Suppose that i assumes some value j such that the corresponding term is a basis element of Γ. We prove that e_j = 0. Since [A, B]^{m+k+j} A^{n−l} is a basis element of Γ, we have n − l ≡ 0 (mod p) (56) and m + k + j ≡ 0 (mod p) (57). Using (56), for some integer N we have n − l = Np, and so e_j = (q^p)^{(k+j)N} c_j(l) − q^{−(m+k)l} d_j(l), which simplifies to e_j = c_j(l) − q^{−(m+k)l} d_j(l). The condition (57) implies that −(m + k) = j − pJ for some integer J, which means that we can further simplify e_j as e_j = c_j(l) − q^{jl}(q^p)^{−Jl} d_j(l) = c_j(l) − q^{jl} d_j(l). At this point, we use (22), (23) to obtain the factorization (58). We bring the reader's attention to the left-most factor on the right-hand side of (58), which is q^{\binom{j+1}{2}} − q^{jl − \binom{l}{2} + \binom{l−j}{2}}. The exponent of q in the second term can be simplified as jl − \binom{l}{2} + \binom{l−j}{2} = \binom{j+1}{2}. Therefore q^{\binom{j+1}{2}} − q^{jl − \binom{l}{2} + \binom{l−j}{2}} = q^{\binom{j+1}{2}} − q^{\binom{j+1}{2}} = 0, and so e_j = 0. The proof for the case n < l is similar, but the relations (27), (28) are used instead of (26), (29). Next, we consider commutators of type (52) and (54). Using the methods in Section 2.1, we obtain the expansions (61), (62). If [A, B]^{m+k} A^{n+l} or B^{n+l} [A, B]^{m+k} is a basis element of Γ, then both m + k and n + l are congruent to zero modulo p. Then, for some integers M, N, we have k = Mp − m and n = Np − l. By routine computations, we have ml − nk = p(Nm + Ml − MNp). Thus, in (61), (62), since q^p = 1, the coefficient of any basis element of Γ is zero in the linear combinations for the commutators (52), (54). Finally, we have the commutators of type (50), (51). Using the techniques in Section 2.1, we expand these as well; if we assume that a term in these expansions is a basis element of Γ, then congruence conditions analogous to the ones above hold, and the corresponding coefficient again vanishes. This proves the lemma.
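For completeness, the exponent simplification invoked in the proof above is elementary; the following display spells out the computation.

```latex
\begin{align*}
jl-\binom{l}{2}+\binom{l-j}{2}
  &= jl-\frac{l(l-1)}{2}+\frac{(l-j)(l-j-1)}{2}\\
  &= jl+\frac{-2jl+j^{2}+j}{2}
   \;=\;\frac{j(j+1)}{2}\;=\;\binom{j+1}{2},
\end{align*}
```

and hence q^{jl − \binom{l}{2} + \binom{l−j}{2}} = q^{\binom{j+1}{2}}, as used to conclude that e_j = 0.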
We can now prove the claim stated before Lemma 3.2.

Proof. This is equivalent to saying that every Lie polynomial in A, B, when written as a linear combination of (2), does not have a term which is a basis element of Γ. This clearly holds for the generators A, B. We are done if we show that, given Lie polynomials u, v, the commutator [u, v], when written as a linear combination of (2), does not have a term which is a basis element of Γ. Each of u, v is in any case a linear combination of (2), and so we are done if we show that the commutator of any two basis elements from (2), when written as a linear combination of (2), does not have a term which is a basis element of Γ. For this, we consider all pairs of types of basis elements; the cases are summarized in Table 2. Some cells in Table 2 are marked with a dash because of the skew-symmetry of the commutator as a Lie bracket operation, while the cell containing 0 reflects the fact that any two powers of [A, B] commute. The equation numbers in the other cells indicate the commutator considered in the proof of Lemma 3.2 which does not have a term that is a basis element of Γ. We note here that (53) carries the restriction n ≠ l, but we further justify that, even if n = l, the methods of Section 2.1 show that the result is a polynomial in [A, B], and in such an expression none of the basis elements of Γ has a nonzero coefficient. This completes the proof.

The Lie algebra L_p

In this section, we construct a Lie subalgebra of H_p with a given basis, and we show that it is equal to L_p.

Definition 4.1. Let K_p denote the linear span of the elements listed in (65), which include A, B together with elements of the form (66), where n, k, l ∈ Z^+ are such that n ≥ 2 implies n − 1 ≡ 0 (mod p), and such that in each element of the form (66), at least one of k, l is not congruent to zero modulo p. We define M_p as the linear subspace of K_p spanned by all elements in (65) except A, B.

Since the elements in (65) are among the elements of (2), which are linearly independent in H_p, we find that the elements in (65) form a basis for K_p, while the elements in (65) except A, B form a basis for M_p. In any of the above cases, the elements from (2) that appear in (65) are not basis elements of Γ. The following relations can be derived using the techniques in Section 2.1: (67)–(75). The next goal is to isolate elements of the form [A, B]^{k+1} A^l, B^l [A, B]^{k+1} and [A, B]^{k+2} in the right-hand sides of (72), (73), (75), and so we define the nested commutator expressions α(k, l) and β(k, l) accordingly. With reference to (76), (77), we show how to construct [A, B]^{k+1} A^l and B^l [A, B]^{k+1} as Lie polynomials for the cases l ≡ 0 (mod p) and k + 1 ≡ 0 (mod p), respectively. The trick is that, in these cases, we have l − 1 ≢ 0 (mod p) and k ≢ 0 (mod p), respectively, and so, by (76), (77), the Lie polynomial constructions α(k, l − 1) and β(k − 1, l) are possible in each respective case. We use the techniques in Section 2.1 to obtain (79), (80). In (79), the condition k + 1 ≢ 0 (mod p) is needed; otherwise, we would have a basis element of Γ instead of an element in (65). This condition, in turn, ensures that the scalar coefficient 1 − q^{k+1} is nonzero, so that our Lie polynomial construction is successful. Similar reasoning applies to (80). What remains to be shown, with reference to (78), is how to construct a nested commutator equal to [A, B]^{k+2} when k + 1 ≡ 0 (mod p) (where we note that k ≥ 1 because p ≥ 2).

Proof. By Lemmas 4.3 and 4.4, K_p is a Lie subalgebra of H_p that contains A, B and is contained in L_p. Therefore, K_p = L_p. To prove (ii), use Lemma 4.3 and part (i).

Lie polynomials in each Z-gradation subspace: a summary

We now characterize the Lie polynomials in each of the Z-gradation subspaces in {H_n : n ∈ Z}.
For each gradation subspace H_n, we simply identify which basis elements of H_n, described at the end of Section 2.1, are among the defining basis elements of the Lie algebra K_p = L_p from Definition 4.1. We also write in concise form all the nested commutator constructions in the proof of Lemma 4.4: given k ∈ N and n, l ∈ Z^+, these nested commutators are the following.
Geographical failover for the EGEE-WLCG Grid collaboration tools

Worldwide grid projects such as EGEE and WLCG need services with high availability, not only for grid usage, but also for associated operations. In particular, tools used for daily activities or operational procedures are considered to be critical. The operations activity of EGEE relies on many tools developed by teams from different countries. For each tool, only one instance was originally deployed, thus representing a single point of failure. In this context, the EGEE failover problem was solved by replicating tools at different sites, using specific DNS features to automatically fail over to a given service. A new domain for grid operations (gridops.org) was registered and deployed following DNS testing in a virtual machine (VM) environment using nsupdate, NS/zone configuration and fast TTLs. In addition, replication of databases, web servers and web services has been tested and configured. In this paper, we describe the technical mechanism used in our approach to replication and failover. We also describe the procedure implemented for the EGEE/WLCG CIC Operations Portal use case. Furthermore, we present the interest in failover procedures in the context of other grid projects and grid services. Future plans for improvements of the procedures are also described.

The level of availability requested from LCG/EGEE was assessed at 99% of the time, corresponding to no more than 87 hours of downtime in a given year. The achievement of such a level of service can be troubled by failures or even planned events, as shown in the Table 1 statistics (source: [14]). Even though this list relates mostly to networking service failures, it is useful for evaluating the problems from the site perspective. Sites in most cases are already working to remove their single points of failure (SPOFs), so our EGEE failover activity does not concentrate on this local task; it is left to the sites' good sense and method. On the contrary, the activity is focused on the remaining slice of probability where local measures are not successful, not strong enough or, in some extreme cases (very big outages), cannot provide a remedy at all. With a geographical failover approach we can compensate for this portion of failures. With this goal, we have investigated the following three technical solutions: DNS (Domain Name System) name re-mapping, GSLB (Global Server Load Balancing) and a failover approach based on BGP (Border Gateway Protocol) [14] [16]. GSLB is an available technique to monitor the health of a set of services, with the purpose of switching users to different data centres depending on heavy load, failure or better round trip time. It can be provided by additional software on top of the most common DNS servers, as well as by dedicated hardware available from several IT companies. BGP is the core routing protocol in use among Internet Service Providers, but it is only suited to providing failover and load balancing for large networks and entire data centres; in fact, it is technically not possible to use BGP to advertise single hosts. That is: solutions based on BGP failover and GSLB are too complex, too expensive or not oriented to our target.
The choice therefore fell on a standard DNS approach, as discussed in the following section. The DNS approach consists of mapping the service name to one or more destinations and updating this mapping whenever some of the destination services are detected to be in failure. The mechanism is best clarified by taking as an example the CIC portal, one of the EGEE operational tools. The CIC portal master instance is physically located in France, while the backup replica is maintained in Italy. These instances are reachable respectively as cic.gridops.org (IP1) and cic2.gridops.org (IP2). When the main instance is no longer reachable, it is possible, by acting on the DNS, to point cic.gridops.org to IP2. If well performed, this operation guarantees service continuity and at the same time is completely transparent to the final user. It is implied that this mechanism needs, as an essential component, a proper data-synchronization procedure.

2.1.2. The new "gridops.org" domain

The domain names of the existing EGEE operational tools were individually registered by their respective institutes. A scenario with non-homogeneous names, such as cic.in2p3.fr for the CIC portal and goc.grid.sinica.edu.tw for Gstat, needs an additional layer. This layer was provided by the registration of a new domain: "gridops.org". This stratagem allows:
• an easier renaming of all the involved services;
• a transparent alias level upon the real service names and IPs;
• the possibility to quickly remap these aliases thanks to very short TTLs.
The new domain has been registered by the INFN-CNAF institute and is managed on a CNAF DNS server, while a slave server has been provided and set up by the GRNET institute [17], which is also involved in the EGEE project activities. Table 2 clarifies the adopted schema; the complete list is available on the failover web pages [18]. The DNS "zones", where the configuration of every domain name is located, can be managed directly on the DNS server machine or through the nsupdate program. Nsupdate is a utility available for the BIND DNS software [19]; it supports cryptographic authentication, allowing the dynamic and remote insertion and deletion of DNS records. It is available on the main Linux distributions, as well as on the other main Unix clones and on Windows.

2.1.3. The use of the new domain name

With reference to Table 2, consider two possible conditions:
1. the primary and backup instances are both available; they can both be queried by the users;
2. the primary instance service fails, but the backup instance service is available; in this case the DNS is operated to point the primary instance name to the backup service.
The switch in condition 2 is performed by changing the CNAME DNS record to point to another host name, which corresponds to the backup service (see RFC 1034 and RFC 1035 for the DNS basic concepts, which cannot be discussed here).
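The switch in condition 2 can be driven by the nsupdate command line tool mentioned above, or by an equivalent programmatic update. The following Python sketch, based on the dnspython library, illustrates the same dynamic-update mechanism; the TSIG key name, secret and server address are placeholders, not the actual gridops.org credentials.

```python
import dns.query
import dns.tsigkeyring
import dns.update

# TSIG key protecting dynamic updates (placeholder values).
keyring = dns.tsigkeyring.from_text({
    "failover-key.": "BASE64SECRET=="
})

def switch_to_backup(zone, alias, backup_target, server):
    """Repoint e.g. cic.gridops.org to cic2.gridops.org by
    replacing its CNAME record, keeping the short 60 s TTL."""
    update = dns.update.Update(zone, keyring=keyring)
    update.replace(alias, 60, "CNAME", backup_target)
    response = dns.query.tcp(update, server, timeout=10)
    return response.rcode()  # 0 (NOERROR) on success

# Condition 2: the primary instance failed, activate the replica.
switch_to_backup("gridops.org", "cic", "cic2.gridops.org.", "192.0.2.1")
```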
This new domain approach has created some technical issues. First of all, absolute hyperlinks are incompatible with DNS remapping. A partial rewriting of the EGEE operational tools was therefore required, but this was completed in a short amount of time. Furthermore, services running on SSL with an X509 certificate produce warnings in every web browser if contacted via a name that differs from the host name the certificate was produced for. This has been solved by producing certificates with alternative names, adding the names under gridops.org; most Certification Authorities should be able to provide this (as the INFN CA [20] does, for example). Another technical issue could be, for example, HTTP sessions being broken by a DNS switch, but our scenario is not as critical as the case of banking, e-commerce, medical or military procedures, so this problem has a lower priority for us.

DNS downsides

The biggest downside of this approach is related to caching. The TTL (Time To Live) tells a DNS server how long it is allowed to cache a DNS record. Caching is a standard feature of the DNS: when one DNS server queries another DNS server for a record, it is allowed to store the answer in its local memory cache for the number of seconds specified by the TTL value. Only after the TTL has expired will the caching server repeat the query to get a fresh copy of the DNS record from the authoritative server. For the needs of this failover activity, the TTL of gridops.org DNS records has been configured to 60 seconds. This means that our wish is to be able to switch from a failing system to its replica in no more than this amount of time. But this may not always be as easy as desired. It may happen that some name servers do not honor all TTLs of cached resource records; nevertheless, this is more frequent for commercial providers than within our research networks. The caching issue must also be considered at the browser level as well as at the operating system level. In general, by default, the three main operating systems, that is Microsoft Windows, Mac OS and Linux, are not affected by TTL caching issues [21] [22]. Concerning web browsers, the behaviour of the widely used Microsoft Internet Explorer and of the open source Mozilla Firefox is quite different. In detail, Mozilla Firefox is very conservative with its value of 60 seconds [23], while Microsoft Internet Explorer, with a 30-minute value, attempts to save many more queries [24]. This of course only affects users who have visited the web site immediately before a DNS update. New visitors, or returning visitors who closed their browser or waited more than 30 minutes since their last visit, are not affected. Furthermore, it is possible to adjust the length of time Microsoft Internet Explorer caches DNS records by updating a registry setting on the client machine, but this is of course neither practical nor user-friendly. Thirty minutes is a considerable time in relation to service availability. Nonetheless, considering our goal of 99% uptime per year and considering the downtime statistics of the last two years, this can be tolerated at this implementation stage.

CIC Portal

The CIC Operations Portal [6] groups together, on a single web interface, several tools that are essential in EGEE for operational management, Grid monitoring and trouble-shooting. For this reason, a recovery procedure needed to be provided as soon as possible: planned or unexpected service outages could break the continuity of several daily activities.
As described in [7], the CIC portal is based on three distinct components:
• the web module, hosted on a web cluster, and mainly consisting of PHP code, HTML pages and CSS style sheets;
• the database (Oracle);
• the data processing system, based on the Lavoisier web service [25].
The failover concern played a great role in the way this architecture has been designed: the clear separation between the different modules indeed simplifies the replication work. All modules' master instances are hosted and running at the IN2P3/CNRS computing centre, located in Lyon, France, while replicas have been set up at CNAF/INFN, Bologna, Italy. The first occasion on which this solution was actually used was a big service interruption for maintenance at the IN2P3 Lyon computing centre, planned for December 2006. A switchover to the backup instance, operated by system administrators on both sides, eventually reduced this outage from several days, as originally planned, to a few hours; the newly set up CNAF instance provided the service for one entire week. Afterwards, the CIC portal users benefited from this failover setup in two other cases. For the replication of the web module, the guiding principles were to:
• give a well defined structure to the code tree and provide it through a versioning system;
• make the code portable, whilst avoiding having too many dependencies as much as possible.
The replication started by building the Apache httpd server and PHP5 at CNAF, and configuring them by taking all the requirements from the original installation, including optional packages such as the Oracle instantclient and all the other needed libraries. New X509 server certificates have been requested from the two involved Certification Authorities, with the special option of the "certificate subject alternative names", which enables them to be used for SSL/https connections on the new "failover" service names: cic.gridops.org and cic2.gridops.org. Upon this basic service layer, the portal PHP code has been deployed. The code, which resides in a CVS repository, is periodically downloaded by a daily cron job; a special CVS tag named "production" identifies the current stable release. Moreover, the files that need local parameter customization are downloaded as templates, then properly parsed and filled in by the update script.

Data processing system (Lavoisier) replication

The Lavoisier installation [26] requires Java, Apache ANT and the Globus Toolkit 4 Java WebServices core. On top of this, a parallel Lavoisier instance has been installed following the official instructions. One role of this component is to periodically fetch data from external sources; therefore, replicating the identical configuration makes it a perfectly interchangeable backup.

Database replication

The database layer represents the most challenging part. The backend is based on Oracle, a database solution with a steeper learning curve, especially concerning its high availability features. The need to build a production-quality replica in a reasonable amount of time led us to start with an intermediate goal: a manual export and import of the database contents.
As long as the amount of data in this database remains within the range of tens of megabytes, this kind of dump transfer is always possible. The exported data has been transferred via HTTP and verified by file integrity checker tools before being applied to the destination instance. We have established a complete, documented procedure, which involves at least two persons (one at each side). This operational procedure includes the certification of the overall data integrity and of the coherence between the two sites, but it still needs a short interruption of service. Consequently, a real synchronization solution between the two databases is under study, as described in Section 4.

SAMAP

SAMAP (an acronym for SAM Admin's Page) is a web-based monitoring tool for SAM framework job submission. Its purpose is to send tests to all kinds of Grid resources, check the results and publish them on the SAM (see Section 3.5). Created and initially hosted at the Poznan Supercomputing and Networking Centre (PSNC), Poznan, Poland, after much positive feedback and use it became an official EGEE operations tool. It has since been integrated into the CIC Portal and is currently commonly used by all the EGEE Grid operators and managers. In the following sections, an overview of the project is provided, while more information and updates can be found on the wiki [27]. Finally, a Savannah project has been created [28] to accept requests and bug submissions from the users and the EGEE project staff. There are two main components responsible for the SAMAP functionality:
• the web portal part, implemented as a set of PHP scripts; it connects to the Grid User Interface part, to send the user's actions, and to the GOCDB, to fetch the list of available sites, Grid users and their roles; it is responsible for recognising end-users, matching their roles with the target Grid sites and displaying the list of sites available for SAM job submission;
• the User Interface part, implemented as a set of PHP and shell scripts integrated with the SAMclient command line tool installed on the same machine; it is responsible for retrieving and executing actions coming from the portal part of SAMAP.
The portal part runs on the web server machine and is currently integrated into the CIC Portal. The UI part is installed on a dedicated machine. Some dedicated Workload Management System (WMS) servers are needed, specifically prepared for SAMAP purposes; the failover of these external requirements is provided separately. For the failover needs, SAMAP has been installed in two independent instances. The main instance is located at the CC-IN2P3 centre and the backup one is located at the INFN-CNAF centre. Both instances use two independent WMS servers located at INFN-CNAF (main) and CERN (backup), so the end-user can use an alternative WMS server for job submission in case one of them is down. A central CVS repository is used for code and configuration synchronization, with the use of a "production" tag. Using the DNS based geographical failover described above, we can easily switch from one independent SAMAP instance to the other, almost transparently for the end user. The only side effect is that all the SAM test jobs submitted on one SAMAP instance are no longer available on the second instance after the failover switch. However, due to the non-production character of the SAM test jobs (which exist only for Grid service monitoring and testing purposes), and the fact that they can easily be resubmitted for a given site, this side effect is not relevant.
3.3. GSTAT, GRIDICE

GSTAT [11] is the first service that the failover activity took care of. The reason is that its mechanism is simply made of Python scripts producing HTML, images and RRD [29] files, and the user interaction is read-only. Besides these good points, the amount of data collected from the information system of a big Grid can be quite large, which led to the decision to simply install another instance with only the code and the configuration synchronized. Most of the information gathered by GSTAT, when the periodic query succeeds, is almost identical between the two servers. The exceptions that might happen are not worrying, because they give only a slightly different snapshot of a site. On the other hand, the availability and the response time of a site can also give opposite values; but if this happens, it means that some instability issue is probably occurring, and having two different views, from distant points of the network, can help when trying to diagnose site problems. For this reason, the requirement to synchronize the monitoring data has been dropped. The code and configuration changes are not frequent, so the method is a semi-automatic CVS update under operator control. GRIDICE [12], although more complex than GSTAT as a monitoring system and also involving a Postgres DB, fits the same line of reasoning; thus it has only been duplicated, with the second instance pointed to the same grid information system.

3.4. GOCDB

As it appears from the tools map [8], the GOCDB is a highly needed source of information for all the other systems. Because of this central role, the EGEE community induced the development team to work tightly coupled with the failover working group to realize a backup replica. Although not yet in place, a DB failover for the Oracle back-end between the main and a backup site in the UK is under development. In parallel, a separate web front-end has been provided in Germany by the ITWM [30] Fraunhofer institute, connected to a CNAF DB which hosts a periodic dump of the GOCDB content, ready for a first level of disaster recovery.

3.5. SAM

This is the heaviest and most worrying work in progress in the EGEE-LCG failover activity. SAM [8] is made of several components, of both standard and Grid middleware kinds. Moreover, it is heavily and frequently updated and queried, both by users and by other services, and its Oracle DB has a growth rate on the order of 10^5 rows/day, which makes it more delicate than any other to properly replicate and switch. A team from the Polish CYFRONET [31] institute, exploiting the good experience collected on a local instance of SAM provided to the regional grid initiative, will try to start a collaboration with the CERN LCG-3D project, to set up the needed Oracle Streams features, which are the basis of a SAM replication. A test run on the currently available resources at CYFRONET has shown that the needed computing power can be provided.

4. Future plans and improvements

4.1. Oracle replication

In order to automate the Oracle back-end synchronization, avoiding the manual transfer of data, we are testing different solutions offered by the Oracle Database 10g suite [32]. Streams, Data Guard and Materialized Views represent three effective but very different technologies. A Materialized View is a complete or partial copy (replica) of a target table from a single point in time. It can be refreshed manually or automatically; when this happens, a snapshot of the DB is delivered to one or more replicas. The Materialized View approach seems to be the fastest solution to develop, but the replicated DB will never be perfectly synchronized: if something goes wrong between two snapshots, all the modifications performed in between will be lost. Besides, Materialized Views can only replicate data (not procedures, indexes, etc.) and the replication is always one-way.
Oracle Streams represent a complex system of propagation and management of data transactions. It is driven by events, like triggers. The main concerns with Streams are the tuning and administration of these events, as well as the management of the correct cause-effect criteria. For a 2-way or an N-way replication, Streams are the only solution, because they avoid the incidental conflicts that the materialized approach cannot solve. Oracle Data Guard is another possibility to face planned or unplanned outages. This automated software solution for disaster recovery requires one or more stand-by instances that are kept updated with the master DB. The transactional synchronization with the master DB is achieved through the archived redo logs, which contain all the operations executed on the primary DB. Finally, the modus operandi for applying the redo logs is a process tunable according to either security, performance or availability. As all these methods have pros and cons, a deep investigation is under way in order to answer the specific questions: "How up-to-date does the copy of the data need to be? How expensive is it to keep it updated?" The application of these features to the Oracle based grid tools treated in this document follows different paths, depending on the weight of the DB and on the amount of manpower that can be dedicated. For the CIC portal, it is possible that the task will be moved into the web code itself. The GOCDB staff, on the other hand, are internally studying a Streams based replication to replace the current first approach of a manual dump. For the SAM heavy DB, it is likely that the CERN LCG-3D knowledge of Streams will be exploited.

4.2. Distributed monitoring system

The failover process consists of three phases: the first step is to detect the failure, the second step is to identify and activate the standby resource, and the last step is for the standby application to go active. For the time being, the failure detection delay can range from a few seconds, when the failure is detected by a human losing the application connection, to a few minutes, thanks to the monitoring systems locally provided by the computing centres. Unfortunately, the procedures that follow the detection take much more time, because they are influenced by a human factor: if a human operator is ready and on duty, he can easily operate a switch, but it is not guaranteed that this is done in a very short time. To correct this weakness, we are working on an automatic switchover capability based on the Nagios software. Nagios is a host, service and network monitoring package, a well-known and stable Open Source solution. What we are doing is designing and developing a distributed matrix of Nagios instances, which can continuously exchange information about the health status of the different monitoring tools under control.
The idea is based on the following main points (a minimal sketch of the resulting decision logic is given below):
• a set of distributed Nagios servers;
• the exchange of test results between the servers;
• strict tests, executed several times to avoid false-positive cases;
• a quorum-based verdict, inspired by the Byzantine Generals Problem [33], leading to a minimum of four monitoring instances and the need to reach a quorum of at least three to take a decision.
Once a verdict has been reached on the Nagios instances, a DNS switch and the proper changes are performed to activate the backup instance for the service believed to be out of service. The last prerequisite of this action is that a positive verdict can also be reached on the status of the backup service.

Table 1. Probability Tree Outage Distribution
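To make the quorum rule concrete, the following sketch illustrates the decision logic only; the function names are ours, and it says nothing about how the actual Nagios instances exchange their test results. The thresholds (four instances, quorum of three) are those stated above.

```python
def quorum(verdicts, needed=3):
    """Byzantine-style majority: True only if at least `needed`
    of the monitoring instances agree on the verdict."""
    return sum(1 for v in verdicts if v) >= needed

def decide_switch(primary_down_votes, backup_up_votes):
    """Trigger the DNS failover only when the quorum agrees that
    the primary is down AND that the backup is actually healthy."""
    return quorum(primary_down_votes) and quorum(backup_up_votes)

# Four Nagios instances report (each test repeated to avoid false positives)
primary_down = [True, True, True, False]   # three of four see a failure
backup_up    = [True, True, True, True]    # backup verified healthy
if decide_switch(primary_down, backup_up):
    pass  # here: perform the DNS CNAME switch described in Section 2.1.3
```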
A Data Driven Modelling Approach for the Strain Rate Dependent 3D Shear Deformation and Failure of Thermoplastic Fibre Reinforced Composites: Experimental Characterisation and Deriving Modelling Parameters

The 3D shear deformation and failure behaviour of a glass fibre reinforced polypropylene is investigated in a shear strain rate range of γ̇ = 2.2 × 10⁻⁴ to 3.4 1/s. An IOSIPESCU testing setup on a servo-hydraulic high speed testing unit is used to experimentally characterise the in-plane and out-of-plane behaviour utilising three specimen configurations (12-, 13- and 31-direction). The experimental procedure as well as the testing results are presented and discussed. The measured shear stress–shear strain relations indicate a highly nonlinear behaviour and a distinct rate dependency. Two methods are investigated to derive the corresponding material characteristics: a classical engineering approach based on moduli and strengths, and a data driven approach based on the curve progression. In all cases, a JOHNSON–COOK based formulation is used to describe the rate dependency. The analysis methodologies as well as the derived model parameters are described and discussed in detail. It is shown that a phenomenologically enhanced regression can be used to obtain material characteristics for a generalising constitutive model based on the data driven approach.

Introduction

Fibre reinforced plastics (FRPs) have become a widely used material for a broad range of structural applications. In particular, their high capacity for energy dissipation is exploited in components exposed to crash and impact loading scenarios in the automotive [1,2] and aviation industries [3,4]. The key aspect for a reliable design process is the characterisation and accurate description of the material behaviour considering arbitrary three-dimensional loading cases as well as loading rates. The characterisation of material properties is particularly challenging for the through-thickness (TT) shear behaviour of composites [5]. They typically exhibit pronounced nonlinear behaviour, which is strongly influenced and changed by the presence of other stress components [6,7]. The challenges of determining such behaviour arise mainly from two domains: (a) specimen design and experimental evaluation, and (b) deduction of TT material characteristics and material modelling. Regarding test design and evaluation, many different approaches have been investigated. Jalai and Taheri developed the varying span method, where three point bending experiments with different span to thickness ratios are used in combination with extensive analytical models to determine the initial TT shear modulus [8]. Yoneyama et al. used a curved beam specimen, also subjected to three point bending, with permanent strain measurement throughout the experiment, enabled by digital image correlation (DIC) [9]. They determined the interlaminar shear modulus, as well as other elastic properties, at predefined load levels by utilising supporting finite element analyses (FEA). Several authors found a V-notched specimen in combination with DIC for strain measurement to be suitable for determining quasi-static shear characteristics [10][11][12]. Regarding the experimental determination of the shear properties of fibre reinforced plastics at elevated velocities, only a few investigations have been published. Hsiao et al. used a combination of a servo-hydraulic test machine, a drop tower and a split HOPKINSON pressure bar (SHPB) to investigate a wide range of strain rates [13].
Zhao et al. employed a combination of a universal testing machine and an electromagnetic SHPB [14]. A modified IOSIPESCU test setup on a servo-hydraulic high speed testing machine [15] was successfully used by Hufenbach et al. to determine rate dependent shear properties of composite materials [16]. The experimental investigations in the present study follow the latter approach. Especially for FEA with respect to crash and impact applications, in-depth knowledge of the characteristic 3D stress-strain and failure behaviour is of high importance. With respect to delamination failure in laminated textile reinforced composites, the interlaminar shear properties are of special relevance for the determination of failure onset. Typical approaches to model the material behaviour are based on phenomenologically driven models in combination with engineering constants. To capture the nonlinearity of the stress-strain relationship under TT shear, different approaches with varying experimental effort and potential for generalisation have been applied in the literature. The approach requiring the least additional effort is to define parameter pairs of loading state and corresponding modulus [9]. It yields a good approximation close to the sampling points but is not well suited for extensive extrapolation; in regions of high nonlinearity, a high density of sampling points is required to capture the curve's progression. Alternatively, attempts have been made to model the underlying effects leading to the deviation from linear behaviour. Hassan et al. superposed inelastic behaviour, continuum damage and interface failure to analyse the energy dissipation during quasi-static failure of composite joints [17]. However, the impact of the strain rate on the TT shear behaviour of textile reinforced composites and the respective characteristics has not yet been investigated in detail. In order to add strain rate dependency to initially static models, a JOHNSON-COOK [18] based approach is often used. This enables a model's application to a wider range of problems; however, it doubles the number of model parameters necessary, significantly increasing the effort for parameter identification. This effect is further amplified by the additional interaction parameters often used to unify different loading scenarios [19][20][21]. Additionally, the simultaneous occurrence of different sources of nonlinearity during the loading of composites poses an enormous challenge for triggering the underlying effects experimentally and for extracting said parameters from the resulting curves. In the field of FRPs, no standardised models or methods for parameter identification exist; it is rather up to the engineer to find a suitable model, to become familiar enough with it to understand the physical meaning of the material parameters, to know what kind of experimental procedure is necessary to trigger the underlying effects, and to know how to extract the necessary parameters from the experimental results. Therefore, the aim of this paper is to develop the simplest possible model, with as few parameters as possible, that is still capable of accurately predicting the macroscopic material response.
This type of model is chosen to provide a proof of concept for the proposed modelling approach as well as a stepping stone for the usage of more sophisticated data driven (DD) methods in the field of material modelling, in particular considering that currently established strategies, which focus on deriving 'stand-alone' engineering parameters based on, e.g., ASTM or DIN-ISO standards, tend to disregard the majority of the acquired data. This is particularly relevant for highly nonlinear constitutive material behaviour, where conventional characterisation methodologies significantly reduce the relevant information on the material behaviour initially observed experimentally, which may be important for subsequent modelling purposes. In addition, more comprehensive data and information are demanded by the rapidly increasing application of (semi-)automatised methods for data processing and analysis. The presented work aims to contribute to a paradigm shift: from material characteristics that are evaluated at predefined points towards characteristics based on the characteristic curve. To achieve this, a purely regression based model is derived here from the development of the tangent modulus. The resulting DD model is compared with a classical modulus and strength based (MaS) approach, which keeps track of the change in secant stiffness at predefined strain states.

Material Configuration and Specimen Preparation

For all presented experiments, a multi-layered weft knitted fabric (MKF) reinforcement was used, which had previously been developed and manufactured at TU Dresden [22]. A characteristic feature of the fibre architecture is the non-crimped fibres of the warp and weft threads, without any undulations. Knitting loop threads secure the fibre interlock and prevent delamination within each individual layer (detailed schematic and computed tomographic illustrations are given in [23]). The MKF consists of commingled hybrid E-glass fibre polypropylene (GF/PP) rovings, with a fibre fineness of 1400 tex in the 1- (warp) and 2-direction (weft) and a loop yarn of 139 tex, representing a layer-wise 3D reinforcement. The result is an equal mass share of 42 % (warp and weft) and 16.5 % (loop), and a total fibre volume fraction of 55 %. The base plates have been manufactured using the hot pressing technology; a more detailed description of the manufacturing process is given in [15]. The thickness of a single consolidated textile layer equals 0.5 mm. Specimens have been cut from the plates by water jet cutting.

Experimental Setup and Testing Program

Three different specimen configurations have been used for the evaluation of the strain rate dependent shear behaviour of the GF/PP. Figure 1a illustrates the alignment of each shear specimen configuration: 12- (in-plane), 13- and 31-configuration (out-of-plane). The notation of the specimen configuration conforms to the orientation of the material axes related to the specimen dimensions, whereupon the first index corresponds to the length dimension and the second to the width dimension. The specimen has a length of 78 mm, a width of 20 mm and is 45° V-notched in the middle with a notch tip distance of 13 mm. The thickness is 4 mm for the 12- and 13-configurations and 10 mm for the 31-configuration. In the case of the 31-configuration, a higher thickness is necessary since the material strength is very low, due to the layers being sheared off one another. The determination of the shear properties has been performed with the IOSIPESCU shear testing device shown in Figure 2a.
This test arrangement is in accordance with ASTM D 5379 [24]: the specimen, which is notched on both sides, is clamped and, when compressed, a zone of torque-free shear load is created between the notches [15,16]. The load is applied with a servo-hydraulic test system Instron VHS 160/20 with a load cell of 160 kN, enabling tests at deformation speeds of up to 20 m/s with a load cell accuracy of ±0.5 %. Deformation measurements and the subsequent strain analysis were performed using high speed DIC, based on the stochastic grey scale pattern on the specimens' surface. For that, two high speed cameras with a maximum frame rate of 200,000 images per second were used in combination with the 3D DIC system Aramis by GOM mbH. The shear strain γ is calculated by averaging the values within a region of interest (ROI) and is determined by the principal strains ε_I and ε_II as γ = ε_I − ε_II. In anticipation of the experimental results, and as illustrated in Figure 2b, the shear distribution is inhomogeneous in the analysed area. Therefore, a constant ROI of 15 mm × 3 mm is chosen for all experiments, in which the shear deformation is analysed and averaged, resulting in one representative value. The chosen ROI covers the area between the V-notches of the specimen with a shear-dominated deformation ratio. A current associated shear strain rate γ̇ = Δγ/Δt is determined within the ROI during the optical measurements, where the measurement time step Δt corresponds to the camera acquisition rate. The associated shear stress τ is calculated from the loading force referred to the smallest specimen cross-sectional area between the notch tips. Each of the investigated configurations (12, 13 and 31) was tested at four nominal loading velocities: 0.00001, 0.0001, 0.01 and 0.1 m/s. For each of the three lower loading velocities, four repetitions were performed, and three for the highest, leading to a total of 45 experiments. The nominal shear strain rate γ̇ of each experiment is calculated between γ = 0.15 % and γ = 0.25 % and is taken as the reference value for the entire experiment, although the value varies slightly throughout the experiment. Subsequently, the shear strain rates determined for the respective experiments are averaged over all samples for each material configuration and loading velocity. An overview of the conducted experiments, 15 tests per configuration, can be found in Table 1. In one experiment, the strain measurement was not triggered; it is excluded from the further analyses. The slight differences in strain rate at equal loading velocity and geometry are due to differences in the strain localisation behaviour of the different material configurations.
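As a concrete illustration of this evaluation chain, the following sketch computes the ROI-averaged shear strain from DIC principal strain fields and the associated shear strain rate from the frame timestamps. It is our reconstruction of the procedure, not the authors' original script; the array layout and variable names are assumptions, and the relation γ = ε_I − ε_II is the one stated above.

```python
import numpy as np

def shear_strain(eps_I, eps_II):
    """ROI-averaged engineering shear strain from DIC principal
    strain fields: gamma = eps_I - eps_II, averaged over all
    facets inside the 15 mm x 3 mm region of interest."""
    return np.mean(eps_I - eps_II)

def shear_strain_rate(gamma, t):
    """Current shear strain rate via finite differences over the
    camera time steps; dt is set by the acquisition rate."""
    return np.gradient(gamma, t)

# example: per-frame ROI fields stacked along axis 0 (synthetic data)
frames_I = np.random.rand(100, 50, 10) * 1e-2
frames_II = -frames_I                       # pure shear: eps_II = -eps_I
t = np.linspace(0.0, 0.1, 100)              # 1 kHz acquisition
gamma = np.array([shear_strain(a, b) for a, b in zip(frames_I, frames_II)])
rate = shear_strain_rate(gamma, t)
# nominal rate of the experiment: evaluated between 0.15 % and 0.25 % strain
mask = (gamma >= 0.0015) & (gamma <= 0.0025)
nominal_rate = rate[mask].mean() if mask.any() else float("nan")
```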
Therefore, the steepening aspect is not considered in subsequent analyses. A low pass filter was applied to the results from the 31-configuration, since high frequency oscillations dominated the raw measurements. The identifiable behaviour begins similar to the other two configurations, though the linear regime is limited to strains smaller than 0.5 %. Again, it is followed by strongly nonlinear behaviour. However, catastrophic failure occurs before re-linearisation can take place. Due to the highly nonlinear nature of the stress-strain curves, the identification of single representative stiffness or failure parameters analogous to materials of brittle characteristic is not possible. Therefore, two approaches to characterise the behaviour are investigated in Section 4. Failure and Fracture Behaviour An exemplary post-experimental failure pattern for each investigated material configuration is shown in Figure 4. An influence of the strain rate on the observable failure patterns could not be identified. The displayed images are therefore considered to be representative across the investigated strain rate domains. For both the 12-and the 13-configuration, nonlinearities dominate the stress-deformation behaviour. In both the 13-and the 31-configuration, the predominant failure mode is of an interlaminar nature. This results in delaminations, which in turn lead to catastrophic failure and specimen separation in the 31-configuration. A more in-depth assessment of the observed phenomena and their origins is given in [15], where a 2/2-twill weave reinforcement is made of commingled hybrid GF/PP yarn and experimentally investigated in a similar manner, without addressing strain rate effects. Their findings are consistent with the ones obtained in this study for MKF reinforcement. Material Modelling and Property Identification Two approaches to model the experimentally identified material behaviour are pursued: 1. modulus and strength based (MaS): An analysis based on ASTM D5379 [24] and DIN EN ISO 14129 [25] with the determination of engineering constants at fixed strain values. 2. data driven (DD): A closed formulation describing the entire experimental curves by coherent formulae with the determining parameters being the material's characteristics. In both cases, the causes of the nonlinearities are not further investigated for the modelling, since all experiments were carried out with continuously increasing strain, making a differentiation of intrinsic effects impossible. Determination of Engineering Constants Four slopes, referred to as shear moduli, were determined for each experiment (see Table 1): three secant moduli G sec (between γ = 0.15 % and γ = 0.55, 1 and 2 %) and a tangent modulus G tan 5 % at γ = 5 % determined between γ = 4.8 and 5.2 %: The determined parameters are arithmetically averaged over the conducted experiments for each strain rate and material configuration. They are summarised in Table 2, and respective maximum stress values of the experiments are given additionally. Strain Rate Dependency and Model Parameters The effects of the shear strain rate on the material parameters G sec 0.55 % and stress levels τ max for the respective shear directions 12, 13 and 31 within the considered strain rate range are presented in Figure 5. The remaining values at 1 and 2 % exhibit similar behaviour. 
In order to incorporate the increase of those parameters with the strain rate, a model originally proposed by JOHNSON and COOK [18], which is widely used in the literature to model the strain rate dependency of model parameters, e.g., [26][27][28], is chosen:

G(γ̇) = G_ref (1 + A_G ln(γ̇/γ̇_ref)),  τ_max(γ̇) = τ_ref (1 + A_τ ln(γ̇/γ̇_ref)).  (4)

The strain rate dependency was found to be accurately described by this model. In (4), A_G and A_τ denote model parameters which control the linear slope (in the log plot) of the model's predictions over the considered shear strain rates. These parameters are determined by a best fit approach. G_ref and τ_ref are the values at the reference shear strain rate γ̇_ref. It is worth emphasising that this type of natural-logarithm formulation may result in unreasonable negative values when the model is evaluated far below the reference strain rate. Special attention has to be paid to this in implementations for FEA, since large local jumps in strain rate may occur, leading to negative moduli. With the identified strain rate material constants A_G and A_τ for the corresponding 12-, 31- and 13-configurations, the tendency of the rising values of shear modulus and shear stress, respectively, can be accurately estimated. An overview of the model constants for selected material parameters is given in Table 3. In this regard, it is emphasised that the shear moduli G_13 and G_31 are considered to be identical due to the typical assumption of a symmetric stiffness tensor for these textile reinforced composites. Reference values were taken at the lowest measured shear strain rates to avoid the aforementioned problems at strain rates lower than the reference one. The corresponding model predictions for the initial shear modulus G^sec_0.55%(γ̇) and the maximum shear stress τ_max(γ̇) are presented in Figure 5. The resulting trajectories at different strains are presented in Figure 6 for the 12-configuration and strain rates of 2.2 × 10⁻⁴ and 3 1/s.

Determination of Material Characteristics

Within this subsection, the applied strategy to model the material's behaviour by continuous formulae is presented exemplarily using the 12-configuration; for the other configurations, an identical procedure is used unless stated otherwise. In order to obtain a continuous description of the material's behaviour, the slopes along the curves, hereafter referred to as the tangent modulus G^tan(γ), are investigated. Therefore, each experimental curve is considered as n_pts subsequent stress-strain points and preprocessed by applying a low-pass filter. The tangent modulus at a given strain point γ_k, with k < n_pts − 1, is subsequently calculated by the difference quotient

G^tan(γ_k) = (τ_{k+1} − τ_k) / (γ_{k+1} − γ_k).

The results are presented in Figure 7. The curves show a finite value at zero strain and a rapid decay with an asymptotic approach to zero. Given these curve characteristics, an exponential law is chosen as the model:

G^tan(γ) = G^tan_0 e^{−aγ},  (6)

with the initial tangent modulus G^tan_0 and the decay parameter a. Integration of (6) with the initial condition τ(γ = 0) = 0 yields a closed form for the stress-strain relationship:

τ(γ) = (G^tan_0 / a)(1 − e^{−aγ}).  (7)

From this, it becomes clear that the chosen model contains the asymptotic stress τ_∞ = G^tan_0 / a, which can be seen for the material configurations 12 and 13 in Figure 3. Parameter identification for G^tan_0 and a is carried out individually for each test by a standard least squares fit. In all cases, both parameters exhibit a distinct dependency on the strain rate.
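A minimal sketch of this identification step is given below: the measured curve is low-pass filtered, and G^tan_0 and a are then obtained by a least squares fit of the closed-form stress response (7). The function names, filter settings and starting values are our own assumptions; scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import savgol_filter

def stress_model(gamma, G0, a):
    """Closed-form stress response (7): the integral of the
    exponential tangent-modulus law G_tan(gamma) = G0*exp(-a*gamma)."""
    return (G0 / a) * (1.0 - np.exp(-a * gamma))

def identify(gamma, tau, lower_bounds=(0.0, 0.0)):
    """Least squares identification of (G0_tan, a) for one test.
    `lower_bounds` implements the constrained variant: set them to
    the mean parameters found at the previous (lower) strain rate."""
    tau_f = savgol_filter(tau, window_length=21, polyorder=3)  # low-pass
    p0 = (tau_f.max() / gamma.max(), 1.0 / gamma.max())        # rough start
    p0 = tuple(max(v, b) for v, b in zip(p0, lower_bounds))    # respect bounds
    popt, _ = curve_fit(stress_model, gamma, tau_f, p0=p0,
                        bounds=(lower_bounds, (np.inf, np.inf)))
    return popt  # (G0_tan, a); the asymptotic stress is G0_tan / a

# Unconstrained fit at the lowest rate, then constrained refits, e.g.:
#   g0, a0 = identify(gamma_low, tau_low)
#   g1, a1 = identify(gamma_high, tau_high, lower_bounds=(g0, a0))
```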
The experimental curves, in combination with the corresponding model predictions based on the previously identified parameters G^tan_0 and a, are presented in Figure 8 ("Model, standard"). The respective dependency of the identified model parameters of (7) on the strain rate can be found in Figure 9 ("Standard fit"). In the case of the decay parameter a (Figure 9b), it becomes apparent that there is no simple dependency of the material parameters on the strain rate. In particular, the simultaneous increase and decrease from γ̇ = 2.2 × 10⁻⁴ 1/s to 3.6 × 10⁻³ 1/s poses a major challenge to deterministic models. However, the usage of such an approach is highly desirable to obtain a more generalised and predictive model. Therefore, a second set of parameters is identified. For this second methodology, additional bounds on the fitting parameters are applied: the fits at the lowest tested strain rate are carried out without any restrictions, while for all subsequent fits the lower bounds of the parameters are set to the means of the parameters at the previous strain rate. This ensures that the well-known tendency of stiffness parameters to increase with the strain rate [16,18,19,29,30] is preserved within the resulting parameter set. The model predictions for the stress-strain curves based on the parameters determined by this constrained method are illustrated in Figure 8 ("Model, constrained"), and the respective model parameters in Figure 9 ("Constrained fit").

Modelling of the Strain Rate Dependency

Given the excellent results of the JOHNSON-COOK model for the strain rate dependency in Section 4.1.2, the same approach is taken to model the strain rate dependency of the material parameters G^tan_0 and a:

G^tan_0(γ̇) = G^tan_{0,ref} (1 + A_G ln(γ̇/γ̇_ref)),  a(γ̇) = a_ref (1 + A_a ln(γ̇/γ̇_ref)),

with the values G^tan_{0,ref} and a_ref at the reference strain rate γ̇_ref and the material constants A_G and A_a. The values A_G and A_a are obtained by a standard least squares fit. Given the advantageous generalisation capability of the constrained methodology, the final model with the parameters γ̇_ref, G^tan_{0,ref} and a_ref as well as the parameters A_G and A_a (Table 4) is obtained by fitting the constrained model's parameters. In Figure 10, the resulting curves are plotted alongside the conducted tests. It can be seen that the model is in excellent agreement with the experimental results for the 12- and 13-configurations. In the case of the 31-configuration, sound results are obtained as well, even though failure occurred already at shear strains of approximately 2 %. The determined material parameters for the respective material configurations are presented in Table 4.
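Combining the closed form (7) with the rate scaling above gives the complete model. The sketch below also clamps the JOHNSON–COOK factor at a small positive value; this guard is our own suggestion, not part of the original formulation, and addresses the negative moduli that the logarithmic form can produce far below the reference strain rate. The numerical values are illustrative only.

```python
import numpy as np

def jc_scale(rate, rate_ref, A, floor=1e-3):
    """JOHNSON-COOK style rate factor 1 + A*ln(rate/rate_ref),
    clamped from below: without the clamp, evaluation far below
    the reference rate yields non-physical negative stiffness."""
    return np.maximum(1.0 + A * np.log(rate / rate_ref), floor)

def stress(gamma, rate, G0_ref, a_ref, A_G, A_a, rate_ref=2.2e-4):
    """Rate dependent stress response: the closed form (7) with
    G0_tan and a scaled by the JOHNSON-COOK factors."""
    G0 = G0_ref * jc_scale(rate, rate_ref, A_G)
    a = a_ref * jc_scale(rate, rate_ref, A_a)
    return (G0 / a) * (1.0 - np.exp(-a * gamma))

# Illustrative numbers only, not the identified constants of Table 4:
tau = stress(gamma=0.05, rate=3.0, G0_ref=1.5e3, a_ref=60.0,
             A_G=0.09, A_a=0.03)
```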
Determining the Range of Validity of the Material Modelling Approaches

A criterion to determine the range of validity of the MaS model, shown exemplarily for the initial secant shear modulus G_sec,0.55%, is presented. It is intended to provide an indicator of the circumstances under which the additional effort required for the DD model is expected to yield a significant improvement in prediction quality. As a basis for the assessment, the degree of nonlinearity, i.e. the deviation from a straight line, is used. The transition from linear to nonlinear behaviour can in principle be described by a defined percentage deviation of the experimental curve from a given straight line with the initial modulus as slope. However, significant oscillations are observed in the low-strain region of some experiments at high strain rates. Due to the low absolute values there, small deviations within the standard testing accuracy already lead to high percentage changes, resulting in an erroneous immediate prediction of the onset of nonlinearity. The criterion therefore has to be robust against such small absolute deviations. Thus, a criterion based on the coefficient of determination r² is employed, where a maximum value of 1 indicates perfect conformity. For each experimental point k, the r² value is determined for the straight line with slope G_tan,0 approximating the actual experimental curve. In an extension of the definition for an entire data set, e.g. given in [31], this leads to

r²(γ_k) = 1 − [Σ_{i≤k} (τ_i − G_tan,0 · γ_i)²] / [Σ_{i≤k} (τ_i − τ̄_k)²]

as the coefficient of determination at every point along the curve, where τ̄_k denotes the mean of the stress values up to point k. Significant nonlinearity is thereby defined as the strain at which the r² value reaches its last local maximum, since the quality of the approximation steadily decreases with proceeding strain. An example of this criterion is presented in Figure 11 up to a strain of 5 %. This method yields good results for the material configurations 12 and 13, with one exception in the 13 case at γ̇ = 0.34 1/s; the respective specimen is therefore not taken into account for further investigations. Furthermore, it becomes clear that no exact boundaries for the range of validity can be specified. However, from a certain point on, the approximation quality of the linear model constantly decreases; this point is therefore chosen as a guideline, even though it must be noted that, in individual cases, the linear approximation might be valid for larger or smaller strains. The determined stresses at which the initial nonlinearity becomes significant are presented in Figure 12. A distinct dependency on the shear rate can be observed. Similar to the onset of damage, the stress-strain curves show a distinct point at which the behaviour changes back to an almost linear one with a very small slope. This behaviour is attributed to the saturation of the underlying effects. The point of nonlinearity saturation is determined analogously to the onset by linear regression, this time starting at the maximum considered strain of 20 % and moving on to lower strains. In this case, the first local minimum of the mean squared error, calculated between the linear approximation and the experimental curve, has been found to be a well-suited criterion. The determined stress values R_∞ are shown in Figure 12. It becomes apparent that the stress at which the re-linearisation occurs increases with the strain rate. The area of the stress-strain curve beyond R_∞ offers great potential for use cases in which energy absorption is crucial.
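A minimal sketch of the pointwise r² criterion from this section; low-pass filtering and the saturation criterion (first local minimum of the mean squared error, scanning downward from 20 % strain) would be implemented analogously. Array contents are assumed placeholders.

```python
import numpy as np

def r2_along_curve(gamma, tau, G_tan0):
    """Pointwise coefficient of determination of the line tau = G_tan0 * gamma,
    evaluated over the first k points for every k along the curve."""
    r2 = np.full(gamma.size, np.nan)
    for k in range(2, gamma.size):
        t, g = tau[: k + 1], gamma[: k + 1]
        ss_res = np.sum((t - G_tan0 * g) ** 2)
        ss_tot = np.sum((t - t.mean()) ** 2)
        r2[k] = 1.0 - ss_res / ss_tot
    return r2

def onset_of_nonlinearity(gamma, r2):
    """Strain at the last local maximum of the pointwise r2 curve."""
    idx = np.flatnonzero((r2[1:-1] > r2[:-2]) & (r2[1:-1] >= r2[2:])) + 1
    return gamma[idx[-1]] if idx.size else np.nan
```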
Strain Rate and Configuration Dependent Experimental Characterisation

The investigated GF/PP-MKF shows similar behaviour under the three shear loading conditions 12, 13 and 31 (see Figure 13). At the beginning of the linear regime, the stiffness under 12, 13 and 31 loading is almost identical. This supports the often-made assumption that all these kinds of loading are identical for balanced MKF. However, this equality of the curves only holds up to γ ≈ 0.5 %, at which point nonlinearity appears in the 31 case. At γ ≈ 1 %, the 12 and 13 cases show nonlinear behaviour as well. All specimens of the 31-configuration fail in this nonlinear phase. In the other configurations, a re-linearisation occurs. This phenomenon is much more pronounced and progresses much more quickly in the 13 case than in the 12 case: in the 13 case, an almost constant value is reached at γ ≈ 5 %, whereas the nonlinear regime in the 12 case extends up to γ ≈ 10 %. In both cases, the behaviour stays linear up to strains of approximately 20 %. The experimental design does not allow for a final assessment of the underlying effects leading to the change in principal material behaviour. This would require unloading of the specimen at intermediate loading states, which currently cannot be implemented with the presented test set-up. However, possible causes are discussed in the context of the developed models and existing literature in more detail in Section 5.3. Furthermore, the GF/PP's behaviour shows a distinct dependency on the strain rate. In particular, the initial material stiffness increases strongly with the strain rate, while the overall shape of the curve is only slightly influenced. This unequal impact of the strain rate on the different aspects of the material behaviour is also captured by the DD model: for the 12- and 13-configurations, the ratio of the rate dependency parameters A_G/A_a is around 3, indicating a three times stronger rate dependency of the stiffness.

Presented Modelling Approaches

Two approaches to capture the highly nonlinear material behaviour exhibited by the GF/PP under different shear-loading conditions have been investigated.

Modulus and Strength Based (MaS) Approach

The MaS method is very close to well-known linear elasticity models and can therefore be used almost directly in all FEA programs. It makes use of established analysis techniques and provides a simple framework. Furthermore, identifying "material parameters" for constant strain rates is as easy as reading values off experimental data. These aspects greatly facilitate the use of this approach. It was shown that the MaS model can yield good results as long as the analysis is limited to small strains. Within this range, the model captures the dependency of the material's behaviour on the strain rate with high accuracy. However, it leads to non-smooth curves, with the deviation between experimental and numerical curves depending strongly on the number of evaluation points. Due to the large nonlinear regime within the stress-strain curve, this necessitates a multitude of "material parameters" for high-fidelity analyses. Determining how many parameters are necessary is aggravated by the fact that the model contains no information about its own applicability, i.e., up to which strain the error is acceptable. In addition, generalisation to arbitrary strain rates becomes increasingly difficult with every "material parameter", since each requires additional parameters to model this dependency.

Data Driven (DD) Approach

The presented DD approach, on the other hand, poses a comparatively high initial hurdle. The underlying model is not necessarily implemented in every FEA program, requiring a potential user to implement the constitutive model via a subroutine. Furthermore, the material parameters cannot be read directly from the stress-strain curve; a nonlinear regression analysis of the curve is required instead. In this work, two regression techniques were applied and the resulting parameter sets compared. In Figure 8, it can be seen that the model predictions for both parameter identification methodologies are in good agreement with the experimental data and with each other.
Only at very high strain rates of 3.0 1/s and high strains of more than 15 % do the constrained model predictions differ notably from the approximation without additional assumptions. Additionally, the trend of rising material parameters with increased strain rate is captured more accurately by the constrained fit (cf. Figure 9). This makes the model more generalising and predictive. Furthermore, the two physically based curve parameters G_tan,0 and τ_∞ barely differ between the two fitting approaches. Therefore, the parameters identified by the constrained methodology are suggested for future work. The DD approach tremendously outperforms the MaS method in terms of accuracy (as long as the number of data points for the MaS method remains somewhat capped), extensibility to arbitrary strain rates, and therefore overall generalisability. This is mainly because, within the presented framework, the strain rate dependency of every single material parameter is modelled individually with one additional parameter, which merely doubles the number of necessary parameters when rate effects are to be considered. Furthermore, the DD approach provides a closed and differentiable form of the stress-strain relationship. This is highly beneficial for the numerical time integration schemes implemented in commercial FEA programs.

Interpretability of the Nonlinearities

The respective transitions between the aforementioned material states cannot be distinctly attributed to a material phenomenon based on the experiments conducted. However, other works on similar materials allow for a preliminary analysis of the underlying effects. In the case of the first transition, the determined stress value R_0 marks the point beyond which the prediction accuracy of linear models significantly decreases. Therefore, such models should only be used in cases where it is known beforehand that stresses will not significantly exceed R_0. However, given the plateau-like nature of the local maximum, minor overstepping of this boundary should only result in slight discrepancies between the models. On the material side, the fibres' support in the form of the matrix begins to degrade and behave nonlinearly. It has been shown that GF/PP exhibits strongly nonlinear viscoelastic behaviour [32,33], which in itself can lead to significant deviations from a linear stress-strain behaviour at strains of less than 1 % [34]. Another common cause of nonlinearity in the behaviour of FRP is the occurrence of micro- or macroscopic cracks within the matrix, which lead to a decrease in the material's stiffness [35][36][37]. Additionally, PP is known to accumulate plastic strain even at very low levels of stress [38]. This in turn weakens the support of the fibres and therefore limits the material's load-bearing capabilities [39]. Recent studies have shown that, in the case of GF/PP, the viscoelastic behaviour is nonlinear from the beginning and that damage and plasticity are triggered at the same time, for the in-plane behaviour at shear stresses as low as 1.76 MPa [19]. The re-linearisation occurring at higher strains suggests that the underlying effects begin to saturate. In the case of inter-fibre fracture, this behaviour has been thoroughly investigated in recent years [35,37,40,41]. In that case, the material's stiffness has been significantly reduced by the formation of a multitude of cracks; however, the initiation of new cracks is energetically not feasible.
Therefore, some loads can still be transmitted, even though they lead to drastically increasing deformation, with a return to a linear stress-strain relation.

Conclusions and Outlook

The TT behaviour of GF/PP with MKF reinforcement at various strain rates has been determined experimentally using a lightweight IOSIPESCU test set-up in combination with high-speed DIC. It was subsequently modelled using a MaS and a DD method. The MaS model has been shown to be mostly suitable for preliminary analyses reaching only small strain values; otherwise, the number of necessary model parameters quickly becomes unfeasible. The DD model, on the other hand, represents the entire stress-strain curve with only two parameters. Furthermore, it requires only two additional parameters to generalise to arbitrary strain rates if the phenomenologically enhanced regression method is used. The resulting model is therefore applicable to high-fidelity analyses with monotonically increasing loading, a high number of model and gradient evaluations (i.e., in the case of a large number of elements or time steps), and varying or previously unknown strain rates and states. Future research on the following two aspects is expected to offer great potential: firstly, the investigation of the constitutive material behaviour describing the orthotropic shear stress-strain relations. In that respect, models incorporating damage and failure, as well as multi-scale approaches that directly address the influence of the textile architecture, should be studied in detail. In particular, the viscoelasticity and plasticity of the composite constituent materials glass and PP [19,42] are identified as promising candidates to describe the observable strain rate sensitivity. Secondly, the pursuit of DD methods in the field of complex material behaviour. Treating experimental results as pure data with known inputs and outputs, and thus applying methods from the field of supervised machine learning to them, is expected to further facilitate model generation. Additionally, such methods could be employed to completely automate the parameter identification process and thereby significantly lower the initial hurdle for using new material models.

Acknowledgments: The algorithms used for fitting and optimisation are implemented in the open source Python library scipy [43]. Most of the figures presented here were created using the open source Python library matplotlib [44].

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations: The following abbreviations are used in this manuscript:
Implementation and interpretation of surface potential decay measurements on corona-charged non-woven fabrics

The aim of this paper is to discuss the peculiarities of the surface potential decay (SPD) curves obtained for certain non-woven media. The experiments were performed on samples of non-woven polypropylene (PP) sheets, which are typically employed in the construction of air filters for heating, ventilation and air conditioning. The samples were in contact with a grounded plane, in order to: (1) ensure better charging and measurement reproducibility; (2) simulate the worst situation of practical interest. They were charged using either a high-voltage wire-type dual electrode or a triode-type electrode arrangement. The aspect of the SPD curves depends on the electrode configuration. When the electric field is strong enough, it can activate charge injection at the insulator-metal interface and extrinsic conduction.

Introduction

Electret filters consist of non-woven fabrics made of dielectric materials that carry a quasi-permanent electric charge. Electric charging of the media improves their particle collection efficiency through the electrostatic particle capture mechanism, which enhances conventional mechanical filtration without increasing the pressure drop [1]. Thus, the efficiency of an air filter is related to the initial level and the persistence of its electric charge [2,3]. Surface potential decay (SPD) measurement techniques are widely used for the investigation of the electric charge on dielectric surfaces in a wide range of industry applications [4][5][6][7][8][9]. Several interesting observations on the corona-charging characteristics of a non-woven polypropylene (PP) sheet air filter are reported in [10]: the surface potential of the filter media is limited by the local discharges that occur inside the porous sheet, and the relative humidity of the ambient air accelerates the charge decay. More recent studies [11,12] confirmed the limitation of the surface potential attained by high-resistivity corona-charged fabrics when placed on conducting surfaces. In two previous papers [13,14], the authors employed the SPD technique to evaluate some factors that influence the corona charging of fibrous dielectrics. The critical issue concerning surface potential decay measurements is the interpretation of the curves, since different physical processes can lead to similar responses [4,15,16]. The rate of the potential decay, dVs/dt, is often considered a better observable than the potential decay itself. The characteristic of the product of the derivative and the absolute time in seconds, t·dVs/dt, versus log t can be used for data analysis in order to separate phenomena with different characteristic times and amplitudes [17,18]. The aim of this paper is to analyse the peculiarities of this mathematical transformation in the case of non-homogeneous dielectrics, such as non-woven fabrics.

Materials and method

The experiments were performed on 100 mm × 85 mm samples of non-woven sheets of PP (sheet thickness of 300 µm and average fibre diameter of 20 µm, as shown in Fig. 1) in ambient air (temperature of 18 °C to 22 °C and relative humidity of 30% to 50%). The PP fibres represent roughly 15% of the volume of the media. The samples were charged using the positive corona discharge generated by a high-voltage wire-type dual electrode [13], facing a grounded plate electrode (aluminium, 120 mm × 90 mm), as shown in Fig. 2(a).
The high-voltage electrode consisted of a tungsten wire (diameter of 0.2 mm) supported by a metallic cylinder (diameter of 26 mm) and positioned at 34 mm from its axis. The wire and the cylinder were energized from the same adjustable high-voltage supply (model SL 300, SPELLMAN; rated voltage: 50 kV; rated current: 6 mA). Unless otherwise specified, the distance between the wire and the surface of the plate electrode was 30 mm. In some experiments, a grid electrode was interposed between the wire and the plate, to obtain a triode-type electrode arrangement [14]. In all the experiments, the samples were charged for 10 s (a duration beyond which no significant increase of the initial surface potential was noticed) by exposing them to the corona discharge, at various values of the high voltage applied to the dual electrode, or at various grid potentials in the case of the triode system. The samples were placed in contact with the grounded plate in order to: (1) ensure better charging and measurement reproducibility; (2) evaluate the charge decay in the worst possible conditions. As soon as the high-voltage supply of the corona charger was turned off, the conveyor belt transferred the samples from Position 1 to Position 2 (Fig. 3), where the surface potential was measured with an electrostatic voltmeter (TREK, model 341B, equipped with a probe model 3450; accuracy: ±0.1% of full scale; drift with temperature: 200 ppm/°C), calibrated before each set of measurements. The measured potential was monitored via an electrometer (Keithley, model 6514) connected to a personal computer (Figure 3). The processing of the data was performed using a virtual instrument in the LabVIEW environment.

Figure 3. Experimental set-up for SPD measurement on non-woven fabrics.

Results and discussion

The results of the SPD experiments carried out with the wire-type dual electrode system for different values of applied high voltage, and with the triode-type electrode arrangement for different values of grid voltage, are shown in figure 4. The main feature of these curves is a slow decay for low surface potential values. At higher surface potentials, the slope of the curves becomes steeper, leading to the so-called "cross-over phenomenon" [4], which is mainly due to charge injection. According to the models described in the literature [18], the cross-over can be explained by the proportionality of the initial SPD rate to the square of the charge deposited on the surface of the media. The representation t·dVs/dt = f(log t) of the SPD measurements for corona-charged non-woven fabrics is shown in figure 5 for both electrode arrangements. In the case of the triode electrode system, the basic response curve is obtained for grid voltages of 0.6 kV and 0.9 kV. This intrinsic response of the media occurs in conditions of moderate temperature and electric field, which are factors of injection activation. The response amplitude increases with the grid voltage, and this corresponds to the occurrence of broad peaks on the 1.5 kV, 4 kV, and 6 kV curves (figure 5(b)). At relatively high voltages, this broad peak, which is caused by charge injection from the grounded electrode [18], is centred at about 10^1.65 s and is superimposed on the baseline response.
This type of injection, which occurs beyond a threshold voltage, is due to the increased electric field strength at the insulator-metal interface [4], so that the charge carriers can cross the potential barrier between the ground electrode and the surface states of the polypropylene fibres, and pass into the conduction band of the dielectric. In the absence of the grid (figure 5(a)), at applied voltages of 12 kV and 18 kV, a broad peak replaces the basic response curve. The characteristic feature of this type of corona charging is the steep peak with a long descent that occurs for an applied voltage of 24 kV, at around 10^0.65 s. This can be explained by the intensity of the electric field and the non-uniformity of the charge deposit. When the electric field is strong enough, it can activate the injection of charges and extrinsic conduction [4]. At the same time, the fibrous and non-homogeneous structure of the media, combined with the non-homogeneity of the charge deposit due to the absence of the grid, can create paths to the ground electrode that accelerate the charge decay at the surface of the samples.
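As an illustration of the t·dVs/dt versus log t transformation discussed in this paper, the sketch below computes the characteristic from a sampled decay curve. The double-exponential decay is an assumed stand-in for measured data; in this representation, processes with different characteristic times and amplitudes show up as separate peaks.

```python
import numpy as np

def tdvdt_transform(t, v):
    """Return (log10 t, t * dVs/dt) for a sampled surface potential decay Vs(t)."""
    dvdt = np.gradient(v, t)          # numerical derivative dVs/dt on a nonuniform grid
    return np.log10(t), t * dvdt

# Assumed synthetic decay with two characteristic times (30 s and 2000 s).
t = np.logspace(-1, 4, 500)           # time in s
v = 1500.0 * np.exp(-t / 30.0) + 800.0 * np.exp(-t / 2000.0)  # potential in V
logt, tdv = tdvdt_transform(t, v)
```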
Cosmic inflation constrains scalar dark matter

In a theory containing scalar fields, a generic consequence is the formation of scalar condensates during cosmic inflation. The displacement of scalar fields away from their vacuum values sets specific initial conditions for post-inflationary dynamics and may lead to significant observational ramifications. In this work, we investigate how these initial conditions affect the generation of dark matter in the class of portal scenarios where the standard model fields feel new physics only through Higgs-mediated couplings. As a representative example, we will consider a Z₂-symmetric scalar singlet coupled to the Higgs via λ_sh Φ†Φ s². This simple extension has interesting consequences, as the singlet constitutes a dark matter candidate originating from non-thermal production of singlet particles out of a singlet condensate, leading to a novel interplay between inflationary dynamics and dark matter properties.

Introduction

New physics beyond the standard model of particle physics (SM) is strongly implied by a number of cosmological observations, such as the existence of dark matter and the baryon asymmetry of our universe. Whenever a theory contains scalar fields, such as the SM Higgs boson, which are light and energetically subdominant during cosmic inflation, the inflationary fluctuations generically displace the fields from their vacuum values, generating a primordial scalar condensate (Enqvist, Meriniemi, & Nurmi, 2013; Enqvist, Nurmi, Tenkanen, & Tuominen, 2014; Starobinsky & Yokoyama, 1994). These specific out-of-equilibrium initial conditions may then affect physics also at low-energy scales and lead to significant observational ramifications.

ABOUT THE AUTHOR

Tommi Tenkanen is a member of a group whose main research activities concentrate on the dynamics of quantum fields during and after cosmic inflation, on (p)reheating, and on quantum gravity. The connection between the standard model Higgs boson and new physics is of particular interest to the group. The research reported in this paper is based on earlier work conducted by the group and its collaborators.

PUBLIC INTEREST STATEMENT

By studying the dynamics of quantum fields in the very early universe, we find a novel connection between cosmic inflation and dark matter properties. This connection severely constrains some theoretical models which aim to explain the origin of dark matter. The connection also means that the study of gravitational waves may provide an interesting new probe of dark matter properties in the near future.

In this proceeding, based on Nurmi, Tenkanen, and Tuominen (in press) and first presented at the From Higgs to Dark Matter 2014 conference, we investigate how the presence of scalar condensates affects the generation of dark matter in the class of portal scenarios where the standard model fields feel new physics only through Higgs-mediated couplings. As a representative example, we will consider a Z₂-symmetric scalar singlet s coupled to the Higgs via λ_sh Φ†Φ s². We show that for small values of the portal coupling, it is possible to slowly produce a sizeable fraction of the observed dark matter abundance via singlet condensate fragmentation already at temperatures above the electroweak (EW) scale. This severely constrains the standard freeze-in scenario and requires earlier model computations to be revisited.
Field dynamics during and after inflation

The scalar sector of the model is specified by the potential

V(Φ, s) = μ_h² Φ†Φ + λ_h (Φ†Φ)² + (1/2) m_s² s² + (1/4) λ_s s⁴ + λ_sh Φ†Φ s²  (1)

where Φ is the usual standard model Higgs doublet and s is a Z₂-symmetric real singlet scalar. Usually, the exact value of the self-interaction λ_s is considered to be irrelevant for the dark matter production, but we shall see that it plays an important role in determining the total dark matter yield. If the scalar fields are light during cosmic inflation and their energy density is subdominant, the mean fields acquire large fluctuations around the minima of their potential. Using the so-called stochastic approach (Starobinsky & Yokoyama, 1994), we find that the typical scalar field values at the onset of the post-inflationary era are of order

s_* ∼ λ_s^(-1/4) H_*,  h_* ∼ λ_h^(-1/4) H_*  (2)

(Enqvist et al., 2014), provided that λ_sh ≲ √(λ_s λ_h). Here H_* is the Hubble parameter value at the end of inflation. We take the results (2) as inflationary predictions for the initial values of the scalar condensates. Assuming instant reheating, the Higgs acquires a large thermal mass m_h² ≃ 0.1 T² and quickly decays into other SM particles (Enqvist et al., 2014). As the scalars relax toward their vacuum values, they open up additional channels for the production of singlet particles. As we shall see, these channels may easily compete with the low-energy particle production. In the following, we will concentrate on the regime where the portal coupling takes a value λ_sh ≲ 10⁻⁷ and where the singlet never thermalizes above the EW scale. Once the singlet s becomes effectively massive, √(3λ_s) s₀ ≃ H, it starts to oscillate about the minimum of its potential. Ignoring the decay processes, the equation of motion of the homogeneous condensate in a flat FRW space reads

s̈₀ + 3H ṡ₀ + V′(s₀) = 0  (3)

where V′ ≡ dV/ds and s₀ denotes the envelope field value of the homogeneous condensate. When the singlet oscillates in a λ_s s⁴ potential, the solution to Equation 3 is an oscillation whose envelope redshifts as s₀ ∝ a⁻¹ (4), i.e., proportionally to the SM bath temperature T, with the overall amplitude set by the tensor-to-scalar ratio r, which measures the energy scale of inflation. In a m_s² s² potential, the envelope instead redshifts as s₀ ∝ a^(-3/2) (5). As the singlet oscillates about the minimum of its potential, a transition from the quartic to the quadratic regime may take place. This happens approximately when 3λ_s s₀² ∼ m_s², which fixes the transition temperature (6). If the singlet is light, m_s ≲ 0.5 GeV, the potential is essentially given by λ_s s⁴ down to the EW scale. The dominant decay channel of the singlet condensate is the perturbative production of singlet particles directly from the condensate s₀(t). In the quartic regime, 3λ_s s₀² ≫ m_s², the corresponding singlet particle production rate scales as Γ_s₀ ∼ λ_s^(3/2) s₀ (7) (see e.g. Ichikawa, Suyama, Takahashi, and Yamaguchi, 2008). In the quadratic regime, the singlet particle production directly from the condensate is kinematically blocked.

Dark matter production

The total number of produced particles can be calculated by writing the effective Boltzmann equation (8) for the number density of singlet particles, n_s. Here s₀ is the singlet condensate background field value, given by Equation 4 in the quartic regime and by Equation 5 in the quadratic regime, Γ_s₀ is the condensate decay rate given by Equation 7, f_s and f_h are the singlet and Higgs phase space densities, respectively, and f_s,bg is the singlet condensate phase space density, chosen such that ρ_s₀ = m_s,eff n_s,bg. The effective mass m²_s,eff is equal to 3λ_s s₀²(T) in the quartic regime and to m_s² in the quadratic part of the potential. The corresponding present abundance (9) is obtained by integrating down to T = T_EW, a natural cut-off for high-temperature processes. Note that the potential is given by λ_s s⁴ down to the EW scale only if m_s ≲ 0.5 GeV.
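As an illustration of the condensate dynamics governed by Equation 3, the sketch below integrates s̈₀ + 3H ṡ₀ + V′(s₀) = 0 in a radiation-dominated background (H = 1/(2t)) for a purely quartic potential. All parameter values are arbitrary assumptions chosen only to exhibit the s₀ ∝ a⁻¹ ∝ t^(-1/2) envelope scaling.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam_s = 1e-2          # assumed quartic self-coupling (illustrative only)
t0, t1 = 1.0, 1e4     # arbitrary initial and final times in the radiation era

def rhs(t, y):
    s, sdot = y
    H = 1.0 / (2.0 * t)                              # radiation-dominated Hubble rate
    return [sdot, -3.0 * H * sdot - lam_s * s**3]    # V'(s) = lam_s * s^3 for V = lam_s*s^4/4

# Start from a frozen, displaced field value, i.e. the inflationary condensate.
sol = solve_ivp(rhs, (t0, t1), [1.0, 0.0], rtol=1e-8, atol=1e-10, dense_output=True)

t = np.logspace(np.log10(t0), np.log10(t1), 400)
s0 = sol.sol(t)[0]
# The oscillation envelope |s0| decays roughly as t**-0.5, i.e. s0 ∝ 1/a ∝ T.
```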
In the quadratic regime, the solution to Equation 8 can be written in closed form (Equations 10-13, with numerical coefficients in units of GeV), where we fix the coefficient C₁ so that the two solutions, Equations 10 and 13, match at the moment of transition (6). The corresponding present abundance then follows (Equation 14). In order to correctly determine the present DM abundance, one should also take the energy density in the singlet condensate into account. We find the condensate's contribution to the observed value of the DM abundance (Equation 15). The final dark matter yield above the EW scale is depicted in Figure 1 for different values of m_s and the parameter r. The result severely constrains the viable region where a frozen-in scalar can act as a DM particle.

Conclusions

In this work, we have studied the implications of the formation of scalar condensates during cosmic inflation. The formation of such condensates is a generic feature of a theory where the scalar fields are light and energetically subdominant during inflation. We have found severe constraints on the freeze-in scenario where the total dark matter abundance is given by the presence of a singlet scalar condensate and singlet particles, produced mainly by the out-of-equilibrium decay of the singlet condensate. Usually, the energy density of the present remnants of the condensate is the dominant contribution to the observed DM relic density. For the maximum observationally allowed inflationary scale, r ≃ 0.1 (BICEP2 Collaboration & Planck Collaboration, 2015), we find that the possibility of a frozen-in scalar as a DM candidate is strictly constrained. We also find that in many cases the portal coupling λ_sh between the singlet scalar and the SM Higgs is required to be super-feeble, λ_sh ≲ 10⁻¹², in order not to produce too many dark matter particles already at high temperatures. Contrary to the standard freeze-in scenario, the singlet self-coupling λ_s is found to play a crucial role in the determination of the correct DM abundance. The result also constrains those models in which the frozen-in scalar acts only as a mediator and decays further to the actual DM particle. For these reasons, all the standard freeze-in scenarios need to be revisited. The study of the formation of scalar condensates and its implications for post-inflationary dynamics has also revealed a novel connection between inflationary dynamics and the observed dark matter abundance, meaning that the study of primordial tensor perturbations may provide an interesting new probe of dark matter properties in the near future.
A Review of Different Word Embeddings for Sentiment Classification using Deep Learning

The web is loaded with textual content, and Natural Language Processing is one of the most important fields in Machine Learning. When the data is huge, however, simple Machine Learning algorithms cannot handle it, and Deep Learning, which is based on Neural Networks, comes into play. Since neural networks cannot process raw text, words have to be converted to vectors through different word embedding strategies. This paper demonstrates those different word embedding strategies implemented on an Amazon Review Dataset, which has two sentiments to be classified, Happy and Unhappy, based on numerous customer reviews. Moreover, we report the differences in accuracy, together with a discussion of which word embedding to apply when.

Introduction

Semantic vector space models of language represent each word with a real-valued vector. These vectors can be utilized as features in multiple applications, for example information retrieval, document classification, sentiment classification, parsing, and text generation. Word embeddings are in fact a class of methods where individual words are represented as real-valued vectors in a predefined vector space. Each word is mapped to one vector, and the vector values are learned in a way that resembles a neural network; hence the procedure is frequently lumped into the field of deep learning. Key to the approach is the use of a dense distributed representation for each word. Each word is represented by a real-valued vector, often with tens or hundreds of dimensions. This contrasts with the thousands or millions of dimensions required for sparse word representations, such as a one-hot encoding. The popular models we consider are the skip-gram and CBOW methods under word2vec, and the GloVe embedding method. In this work we analyze the different word embedding models for our deep learning model on an Amazon Review Dataset, and present the results in terms of accuracy.

An Overview of the Different Word Embeddings

Embedding Layer: An embedding layer, for lack of a better name, is a word embedding that is learned jointly with a neural network model on a specific natural language processing task, for example language modelling or document classification. It requires that the document text be cleaned and prepared such that each word is one-hot encoded. The size of the vector space is specified as part of the model, for example 50, 100, or 300 dimensions. The vectors are initialized with small random numbers. The embedding layer is used at the front of a neural network and is fitted in a supervised way using the backpropagation algorithm. The one-hot encoded words are mapped to the word vectors. If a recurrent neural network is used, then each word may be taken as one input in a sequence. This approach of learning an embedding layer requires a lot of training data and can be slow, but it learns an embedding targeted both to the specific text data and to the NLP task.

GloVe Embedding: The Global Vectors for Word Representation, or GloVe, algorithm is an extension of the word2vec method for efficiently learning word vectors, developed by Pennington et al. at Stanford.
Classical vector space representations of words were produced using matrix factorization techniques, such as Latent Semantic Analysis (LSA), that do a great job of exploiting global text statistics but are not as good as learned methods like word2vec at capturing meaning and demonstrating it on tasks like computing analogies. GloVe is an approach that marries the global statistics of matrix factorization techniques like LSA with the local context-based learning in word2vec. Rather than using a window to define local context, GloVe builds an explicit word-context, or word co-occurrence, matrix using statistics over the entire text corpus. The outcome is a learning model that may produce generally better word embeddings. The essential distinction between word2vec and GloVe is that word2vec is a "predictive" model, whereas GloVe is a "count-based" model. Predictive models learn their vectors in order to improve their predictive ability, i.e., to lower Loss(target word | context words; vectors), the loss of predicting the target words from the context words given the vector representations. In word2vec, this is cast as a feed-forward neural network and optimized as such using SGD and similar methods. Count-based models learn their vectors by essentially performing dimensionality reduction on the co-occurrence count matrix. They first build a large matrix of (words × contexts) co-occurrence information: for each "word" (the rows), one counts how frequently this word appears in each "context" (the columns) in a large corpus. The number of "contexts" is of course large, since it is essentially combinatorial in size. They then factorize this matrix to yield a lower-dimensional (words × features) matrix, where each row now yields a vector representation for a word. In general, this is done by minimizing a "reconstruction loss" that seeks the lower-dimensional representations which can explain most of the variance in the high-dimensional data. In the particular case of GloVe, the count matrix is preprocessed by normalizing the counts and log-smoothing them. This turns out to be beneficial for the quality of the learned representations.

Results and Conclusions

The methods were implemented on an Amazon Review Dataset, which contained almost 1 million words and 0.72 million sentences posted by customers. There were two sentiments to be classified: Happy and Unhappy. For each method, the dataset was divided into 70% training data and 30% test data, and the training was done with only 2 epochs on a CPU. However, in each case an epoch took almost 3 to 4 hours on average to complete.

Embedding without pre-trained weights: The output vectors are not computed from the input data using any mathematical function. Instead, each input integer is used as an index into a table that contains all possible vectors. That is the reason why the size of the vocabulary has to be specified as the first argument.
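A minimal sketch of such an embedding-layer classifier in Keras; the vocabulary size, sequence length, embedding dimension, and network head are assumed values for illustration, not the configuration used in the experiments.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len, embed_dim = 20000, 200, 100   # assumed hyperparameters

model = keras.Sequential([
    # Trainable lookup table: each integer word index maps to a dense vector.
    layers.Embedding(input_dim=vocab_size, output_dim=embed_dim),
    layers.GlobalAveragePooling1D(),                # average the word vectors per review
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),          # Happy (1) vs Unhappy (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: integer-encoded, padded reviews; y: binary sentiment labels (placeholders).
x = np.random.randint(0, vocab_size, size=(1000, seq_len))
y = np.random.randint(0, 2, size=(1000,))
model.fit(x, y, epochs=2, validation_split=0.3)     # 70/30 split and 2 epochs, as in the paper
```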
GloVe Embedding: The statistics of word occurrences in a corpus are the primary source of information available to all unsupervised methods for learning word representations and, although many such methods now exist, the question still remains as to how meaning is generated from these statistics and how the resulting word vectors might represent that meaning. These insights underlie the GloVe model for word representation, so called, for Global Vectors, because the global corpus statistics are captured directly by the model.

The goal of word2vec is to discover word embeddings, given a text corpus. In other words, it is a method for finding low-dimensional representations of words. As a consequence, when we discuss word2vec we are usually discussing Natural Language Processing (NLP) applications. For instance, a word2vec model trained with a 3-dimensional hidden layer will result in 3-dimensional word embeddings. It means that, say, "apartment" will be represented by a three-dimensional vector of real numbers that is close (think of it in terms of Euclidean distance) to a similar word such as "house". Put another way, word2vec is a procedure for mapping words to numbers. There are two fundamental models used within the context of word2vec: the Continuous Bag-of-Words (CBOW) and the Skip-gram model. Here the experiment was done only with the CBOW model, along with negative sampling. In the CBOW model, the objective is to predict a target word given a context of words; in the simplest case, the context is represented by only a single word.

Conclusion

The astonishing fact was that the embedding layer with no pre-trained weights had a better result than word2vec with pre-trained weights or GloVe embedding. This is an area where further tests can be done, most likely on a much bigger dataset or for other purposes such as text generation. In any case, for sentiment classification based on customer reviews, pre-trained weights could not meet those expectations, which can only be understood by means of further investigation.
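As a complement to the CBOW description above, a minimal gensim sketch of training CBOW word vectors with negative sampling; the toy corpus and hyperparameters are illustrative assumptions.

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized reviews (illustrative only).
sentences = [
    ["this", "product", "is", "great"],
    ["terrible", "quality", "very", "unhappy"],
    ["happy", "with", "this", "purchase"],
]

# sg=0 selects CBOW; negative=5 enables negative sampling with 5 noise words.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1,
                 sg=0, negative=5, epochs=10)
vec = model.wv["happy"]            # 100-dimensional embedding for "happy"
```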
QSAR, homology modeling, and docking simulation on SARS-CoV-2 and Pseudomonas aeruginosa inhibitors, ADMET, and molecular dynamic simulations to find a possible oral lead candidate

Background

In search of potent and non-toxic iminoguanidine derivatives formerly assessed as active Pseudomonas aeruginosa inhibitors, a combined mathematical approach of quantitative structure-activity relationship (QSAR), homology modeling, docking simulation, ADMET, and molecular dynamics simulations was executed on iminoguanidine derivatives.

Results

The QSAR method was employed to statistically analyze the structure-activity relationships (SAR) and yielded good statistical significance for an eminent predictive model (GA-MLR: Q2LOO = 0.8027; R2 = 0.8735; R2ext = 0.7536). Thorough scrutiny of the predictive models disclosed that the Centered Broto-Moreau autocorrelation (lag 1, weighted by I-state) and the 3D topological distance-based autocorrelation (lag 9, weighted by I-state) govern the biological activity, and provided useful information on the properties required to develop new potent Pseudomonas aeruginosa inhibitors. The subsequent modeling work accomplished here emphasizes finding a potential drug that could aid in treating Pseudomonas aeruginosa infection and SARS-CoV-2. This involves homology modeling of the RNA polymerase-binding transcription factor DksA and COVID-19 main protease receptors, docking simulations, and pharmacokinetic screening studies of hit compounds against the receptors to identify potential inhibitors that can serve to regulate the modeled enzymes. The modeled proteins exhibit more than 90% of residues in the most favorable regions, with a minimal disallowed region of less than 5%, and were simulated in a hydrophilic environment. Docking simulations of the whole series to the binding pockets of the built protein models were done to demonstrate their binding modes and to recognize critical interacting residues inside the binding sites. Their binding stability in the modeled receptors has been assessed through RMSD, RMSF, and SASA analysis from 1-ns molecular dynamics simulation (MDS) runs.

Conclusion

The identified compounds could be proficient leads for SARS-CoV-2 and Pseudomonas aeruginosa drug discovery; having said that, extra testing (in vitro and in vivo) is essential to establish their potential as novel drugs and their mode of action.

Supplementary Information

The online version contains supplementary material available at 10.1186/s43141-022-00362-z.

Background

Coronaviruses are divided into four genera: Alphacoronavirus, Betacoronavirus, Gammacoronavirus, and Deltacoronavirus [1]. Many species, including humans, have been shown to suffer respiratory, intestinal, neurological, and hepatic disorders caused by these viruses, particularly Betacoronavirus [2]. The World Health Organization (WHO) named it 2019-novel coronavirus (2019-nCoV) after determining the involvement of a coronavirus in COVID-19 [3] (https://www.who.int/emergencies/diseases/novel-coronavirus-2019). Owing to the world health emergency, the International Committee of Coronavirus Study Group (ICCSG) proposed using the name severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) for 2019-nCoV [4]. Because of the onset of pandemic crises around the world, SARS-CoV-2 has become a major public health concern [5]. The WHO has labeled COVID-19 a public health matter of global concern because of its rapid spread and ever-increasing reproduction/transmission number [6].
As of August 13, 2021, the number of confirmed cases is 205,338,159 and the number of confirmed deaths is 4,333,094 (https://www.who.int/emergencies/diseases/novel-coronavirus-2019). During infection with SARS-CoV-2, the amount of Pseudomonas aeruginosa increases, encouraging inflammation by accelerating the recruitment of inflammatory cells and increasing the level of angiopoietin II. The protease is one of the numerous SARS-CoV-2 binding targets [7,8]. Drugs remain the only therapeutic option for Pseudomonas aeruginosa and SARS-CoV-2, despite efforts to create a vaccine [9]. Due to different medication resistance scenarios around the world, the number of people dying annually from Pseudomonas aeruginosa and SARS-CoV-2 is steadily rising [9,10]. Given the lack of viable medicines and the continual growth in transmission and fatality numbers, computer-aided drug discovery (CADD) [11] could be a good strategy to discover hit drugs for Pseudomonas aeruginosa and SARS-CoV-2 treatment. This computer-aided drug design and development technique cuts down on the cost and time it takes to find new therapeutic candidates [12]. Ahmad et al. have reported docking, molecular dynamics simulation, and MM-PBSA studies of Nigella sativa compounds to find likely natural antiviral drugs for SARS-CoV-2 treatment [13]. Amin and his coworkers have reported the use of Monte Carlo-based QSAR, virtual screening, and molecular docking studies of some in-house molecules as inhibitors of COVID-19 [14]. Several CADD methods have been used to study and design hit drugs such as anticancer agents [15,16], monoamine oxidase B inhibitors [17], antimicrobials [18], dengue virus inhibitors [19], and antidiabetic drugs [20]. To select a chemical compound as a viable treatment, in silico techniques such as quantitative structure-activity relationship (QSAR), molecular docking simulation, absorption, distribution, metabolism, and excretion (ADME) prediction, and dynamics modeling of many drugs from known drug libraries are applied against the target receptors. In the present research, we executed QSAR studies on some chemical libraries using genetic function approximation-multiple linear regression (GFA-MLR). The best model out of the many generated models was systematically analyzed, and the results obtained from these methods were compared for validation. Next, we performed homology modeling of our query proteins, then docking simulations to obtain information about the main interaction types in the active pockets of the built receptor models. The drug-likeness parameters of the best docked compounds were assessed via an in silico approach. Finally, simulations were executed to assess the dynamic stability of the docked receptors. The current modeling study offers insight into the structural requirements of these COVID-19 and Pseudomonas aeruginosa inhibitors and may aid in designing novel drugs.

Methods

Density functional theory (DFT/B3LYP) with the 6-31G+(d,p) basis set in Gaussian 09 was used to thoroughly optimize the geometries of the iminoguanidine derivatives (PubChem database accession number AID_131512). The PaDEL v2.20 program [21] was used to calculate the descriptors for the QSAR analysis. The association between one dependent variable (pMIC50) of 25 compounds and various independent variables was studied using GA-MLR statistical techniques.
The genetic approximation (GA) technique included in QSARINS v2.2.4 [22] was used to perform multiple linear regression (MLR) analysis of the molecular descriptors, dividing the database into two groups: a training set to construct the quantitative model and a test set to confirm the proficiency of the built model. All the minimum inhibitory concentration (MIC) activity data in the experiments were first translated to the negative logarithm of MIC (pMIC50 = −log10(MIC)); the chemical structures of the iminoguanidine compounds, as well as their activity levels, are shown in the accompanying table. To test the internal validity of the regression model, we employed the LOO (leave-one-out) approach [23,24]; Q2LOO is the most frequent way of determining a model's internal prediction ability. In addition to Q2LOO, we used randomized validation [25] (Q2rand, R2rand), the root mean square error of the training set (RMSEc), and the coefficient of determination to assess model robustness. For external validation, we used Q2F1 [26], Q2F2 [27], and Q2F3 [28], as well as the concordance correlation coefficient (CCC) and the root mean square error of prediction (RMSEp), as recommended by the Organization for Economic Cooperation and Development (OECD) [29]. Some of the evaluation criteria are Q2LOO > 0.5, R2 > 0.6, 0.85 ≤ k ≤ 1.15 or 0.85 ≤ k' ≤ 1.15 [30], Q2F1 > 0.5, Q2F2 > 0.5, Q2F3 > 0.5, and CCC > 0.80.

Homology modeling

To build the initial structures for the molecular docking and MD simulation studies, homology modeling of the Pseudomonas aeruginosa and SARS-CoV-2 secondary structures was undertaken. Following [31], the coordinates for the query structure were assigned from the template structure using pairwise sequence alignment. The MODLOOP server [32] was used to correct irregular secondary structures. The 3D protein structures were then built using MODELLER 10.1 [33]. The model with the lowest discrete optimized protein energy (DOPE) score was chosen, and this model was then energy minimized (adding hydrogens and Gasteiger charges) using Chimera v1.10.2 software with the AMBER FF14SB force field. The SAVES server was used to verify the model quality: stereochemical characteristics, the compatibility of the atomic (3D) model with its amino acid residues, bond lengths, bond angles, and side-chain planarity were all checked. PROCHECK [34] was used to calculate Ramachandran plots to verify the stereochemical quality of the modeled protein structures. Verify3D [35] and ERRAT [36] were used to create an environment profile. WHATIF was used to investigate residue packing and atomic contacts, whereas WHATCHECK was utilized to calculate the Ramachandran plot's Z score [37]. Using PyMOL, the RMSD was calculated by superimposing the 3D modeled protein on the template.

Structure-based virtual screening and docking

To perform the molecular docking simulations and virtual screening, we utilized AutoDock Vina [38]. The drugs listed in Table S1 were used as control drugs against the SARS-CoV-2 main protease and the Pseudomonas aeruginosa proteins, respectively.

Molecular dynamics simulations (MDS)

MDS is a thermodynamics-based procedure that aids in the investigation of the dynamic changes encountered in protein-ligand complexes. To certify the integrity of the ligand-protein combinations in our investigation, we used MDS to examine the best ligands screened in the previous phases together with their corresponding proteins.
The molecular docking complexes were simulated using the NAMD 2.13 Win64-multicore version [40] with the Chemistry at HARvard Macromolecular Mechanics (CHARMM36) force field [41] and the TIP3P water model. Several time-stepping schemes were applied, with a 2 fs integration time step. The CHARMM-GUI web service [42] was used to produce ligand topology and parameter files, to produce psf files of the protein-ligand complexes and the water box, and to neutralize the system with potassium (K+) and chloride (Cl-) ions. The production simulation (NPT) ran for 1 ns, preceded by 5000 steps of minimization (NVT). The temperature was kept constant at 303 K using a Langevin thermostat, and periodic boundary conditions were applied at the system's perimeter.

Results

In the current study, about 1500 descriptors were computed with PaDEL v2.20 on the DFT (B3LYP/6-31G+(d,p)) geometries of the 25 compounds studied, and genetic approximation-multiple linear regression (GA-MLR) was employed on these descriptors. First, all descriptors with a low correlation coefficient with respect to the dependent variable were discarded. In addition, descriptors with a pairwise correlation coefficient larger than 0.95 were eliminated from the data matrix to reduce redundancy. The GA analysis selects the remaining descriptors, which are then employed in the creation of the MLR models. The QSARINS software v2.2.4 [44,45] was used to divide the entire dataset into training and test sets at random. From the training set, the GA-MLR model with the highest coefficients of determination and explained variance in leave-one-out cross-validated prediction, and a reasonable ability to predict the MIC50 values of the test set chemicals, was chosen as the extended QSAR model. The lower the p-value, the more significant the regression term (Table 1); all of the descriptors' p-values were less than 0.05, indicating that they were statistically significant at the 95% level. Edache et al. [46] stipulated that the descriptors used in a QSAR model should not be inter-correlated with one another; if descriptors are heavily correlated among themselves, the model will be highly unstable and statistically unreliable. The variance inflation factor (VIF) is therefore computed to evaluate descriptor inter-correlation. The VIF values of both descriptors in this model are 1.23, which is less than the threshold value of 10 [47]. Table 1 shows that the parameters utilized in the final model have relatively low inter-correlation based on the VIF analysis. The mean effect (MF) value was calculated for each descriptor to determine its relative importance and contribution to the model. ATSC1c is a molecular descriptor based on the Centered Broto-Moreau autocorrelation with lag 1/I-state weighting. The descriptor is positively related to pMIC50: with a mean effect of 76%, increasing the ATSC1c descriptor boosts the anti-Pseudomonas aeruginosa activity of the compounds. The final descriptor is TDB9s, which stands for 3D topological distance-based autocorrelation, lag 9/weighted by I-state; with a mean effect of 24%, a rise in the value of this descriptor also increases the inhibitory activity of a compound. Internal and external cross-validation was used to assess the model's predictive potential. The model's results, as well as their regression statistics, are presented in Tables S2 and S3. Figs. S1 and S2 present the plots of experimental versus predicted activity for the training set and the test set compounds, calculated using model 1.
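A minimal sketch of two of the validation statistics used here, Q2LOO via leave-one-out cross-validation and the concordance correlation coefficient (CCC), using scikit-learn; the descriptor matrix and activity vector are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def q2_loo(X, y):
    """Leave-one-out cross-validated Q2 for an MLR model."""
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    ss_press = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_press / ss_tot

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient."""
    mx, my = y_true.mean(), y_pred.mean()
    vx, vy = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mx) * (y_pred - my))
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# X: descriptor matrix (e.g., ATSC1c and TDB9s columns); y: pMIC50 (placeholders).
X = np.random.rand(25, 2)
y = np.random.rand(25)
print(q2_loo(X, y), ccc(y, LinearRegression().fit(X, y).predict(X)))
```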
The fitting, internal validation, and external validation criteria values for the model were judged against the acceptable thresholds [48][49][50]. Furthermore, the residuals of the predicted pMIC50 values for both the training and test sets are plotted against the experimental pMIC50 values in Figs. S3 and S4. The model did not show any proportional or systematic inaccuracy, since the spread of residuals on both sides of zero is random (Fig. S3). The residuals calculated using prediction by leave-one-out (LOO) (Fig. S4) confirm the claim [51]. Each compound's leverage can be computed and plotted against the standardized residuals, allowing for graphical spotting of outliers and influential compounds in a model. The diagonal elements of the hat matrix H give the molecules' leverages, which may be computed using the formula below:

H = X (X^T X)^(-1) X^T

where X is the training set matrix and X^T denotes the transpose of X. Figs. S5 and S6 show the applicability domain as a squared region defined by a ±2.5 bound for standardized residuals and the leverage threshold

h* = 3(p + 1)/n  (2)

where p signifies the number of model parameters and n the number of compounds [29]. Fig. S5 shows that the test set's compound 15, a response outlier, and compound 16, a structurally influential outlier, are outside of this square area. In Fig. S6, using prediction by leave-one-out (LOO), compounds 15 and 20 of the training and test sets, with standardized residuals exceeding 2.5 standard deviation units, are response outliers. A structurally influential outlier is compound 16 from the test set, which is not within the cut-off value of h* = 0.5. Surprisingly, one of the training set compounds and two of the validation compounds had leverages greater than the threshold value together with low residuals. As previously established by Jaworska and coworkers [52], compounds with leverage greater than h* but low residuals stabilize the model and make it predictive for new compounds that differ structurally from the training set [53]; this is only true when the training compound residuals are low. To ensure that all molecules from the prediction set were within the model domain, we used the Insubria graph [54]. The leverages for the prediction set versus the predicted values are plotted in the graph (Fig. S7). Based on molecular similarity to the training set compounds (leverage value) and the predicted value of pMIC50, we identified the model's reliable prediction zone with this figure. We discovered that 50% of the molecules in the test set fit into the model's applicability zone; compounds 12, 16, and 18 were found to be beyond the zone. To ensure model quality, the Y-scrambling procedure was used to confirm the absence of chance correlations in the initial GFA-MLR model. As expected, Figs. S8-S10 show that a satisfactory model was obtained.

Homology modeling

Homology modeling is typically used to create protein models and follows a set of well-defined and widely acknowledged procedures [55]. During the homology modeling phase, we aimed for experimentally determined structures with high sequence identity to the COVID-19 virus main protease and the RNA polymerase-binding transcription factor DksA (plasmid). The chain A, 3C-like proteinase (severe acute respiratory syndrome coronavirus 2) target and template (PDB I.D: 5R7Y) protein sequences were aligned as indicated in Fig. 1A.
Homology modeling

Homology modeling is typically used to create protein models and follows a set of well-defined and widely acknowledged procedures [55]. During the homology modeling phase, we aimed for experimentally determined structures with high "sequence identity" to the COVID-19 virus main protease and to the RNA polymerase-binding transcription factor DksA (plasmid). The target (chain A, 3C-like proteinase of severe acute respiratory syndrome coronavirus 2) and template (PDB ID: 5R7Y) protein sequences were aligned as indicated in Fig. 1A. The homology model of the COVID-19 primary protease in association with carmofur was built using the crystal structure of chain A, 3C-like proteinase (PDB: 5R7Y) as a template, and then refined by loop modeling. Figure 1B shows an overview of the projected 3D structures of the aligned template and target sequences; the alignment, calculated using the PyMOL molecular viewer, yielded an RMSD value of 0.169. In this investigation, the Discrete Optimized Protein Energy (DOPE) score [56], which is included in the MODELLER package and is extensively used to assess the quality of 3D models, was employed. The DOPE score values for the SARS-CoV-2 models are presented in Table 2. Models with a lower DOPE score and high molpdf values were regarded as structurally sound and reliable in terms of energy values. The model with a DOPE score of -36285.0 and a molpdf value of 1550.75635 (model 1) was chosen in the case of the COVID-19 virus. The model and templates were superimposed according to the DOPE score profiles, as presented in Fig. 2. The long active-site loops between residues 10-50, 100-120, and 280-310, as well as the long helices at the C-terminal and N-terminal ends of the target sequence, have relatively high energy according to the plotted DOPE score profile; these lengthy loops interact with the region that makes up the active site. Different techniques, such as PROCHECK (Ramachandran plot), PROVE, ERRAT2, and VERIFY 3D, were used to assess the 3D model's structural integrity. The modeled protein's Ramachandran plot (Fig. 3A, B) shows that 93.3% (250 aa) of the total residues are in the most favored regions. The modeled protein's Verify 3D plot (Fig. 3C) was obtained, and it showed PASS. The ERRAT2 overall quality factor for the COVID-19 model is around 88.26% (Fig. S11A). The overlap of the structure of transcription factor DksA2 from Pseudomonas aeruginosa with the RNA polymerase-binding transcription factor DksA models shows great similarity, possibly due to the homology modeling procedure (Fig. 4A). Ten (10) PDB structures were generated using MODELLER 10.1, and the best receptor model was chosen based on the DOPE assessment method, as presented in Table 3. Figure 4 shows an overview of the projected 3D structures of the aligned template and target sequences; the alignment, calculated using PyMOL, yielded an RMSD value of 0.288. The model and templates were superimposed according to the DOPE score profiles, as shown in Fig. 5. To evaluate the reliability of the RNA polymerase-binding transcription factor DksA models built for docking purposes, we used a Ramachandran plot, which identifies the Phi/Psi angle distributions in the 3D model within the allowed or disallowed regions. The Ramachandran plot (Fig. 6) of the modeled protein shows 94.6% (122 aa) of the total residues in the most favored regions, 3.1% (4 aa) in additionally allowed regions, 1.6% (2 aa) in generously allowed regions, and 0.8% (1 aa) in disallowed regions, indicating a good-quality model. The modeled protein's Verify 3D plot (Fig. 6C) was obtained, and it showed PASS. The ERRAT2 overall quality factor for the RNA polymerase-binding transcription factor DksA model is around 91.667% (Fig. S11B).
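Choosing the best receptor model from the MODELLER outputs, as described above, reduces to ranking candidates by DOPE score. A minimal sketch follows; only model 1's score comes from Table 2, and the other values are invented for illustration.

```python
# Illustrative: select the structurally soundest homology model by DOPE score.
dope_scores = {"model_1": -36285.0, "model_2": -36102.4, "model_3": -35990.1}
best_model = min(dope_scores, key=dope_scores.get)   # lowest (most negative) DOPE
print(best_model)  # -> model_1
```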
Molecular docking simulations

The selected configurations from the docking results are required in the molecular docking simulation to assess the theoretical correctness of the produced complex structure between ligand and receptor. The active sites of the modeled SARS-CoV-2 proteinase and the modeled RNA polymerase-binding transcription factor DksA were docked with all 25 studied compounds and 8 control (tested) drugs. Within the defined active site, the docking program generates several poses with varied placements. The binding affinity score was used to determine the final ranking of the ligand docking poses. The binding affinity scores of all the studied compounds and the control drugs are presented in Table S4. The binding poses of the best ligand and the standards with the lowest binding affinity are depicted in 3D and 2D diagrams in Fig. 7. Ligand number 18 has the highest binding affinity against the SARS-CoV-2 virus main protease, at -8.7 kcal/mol, followed by the control (Ritonavir) at -8.4 kcal/mol. As illustrated in Fig. 7B, a pi-donor hydrogen bond interaction with the terminal benzene ring was also formed. Against the modeled RNA polymerase-binding transcription factor DksA protein, Doxycycline showed a better binding affinity than ligand numbers 7, 12, and 15 (Table S4). Doxycycline has the most negative binding affinity, -7.2 kcal/mol, followed by Ritonavir with -6.7 kcal/mol. Compounds 7, 12, and 15 have a better binding affinity (-6.5 kcal/mol) than the rest of the studied compounds. From Fig. 7C-E, compound 7 forms two conventional hydrogen bond interactions with the active site residues Pro109 (4.24 Å) and (5.58 Å); it also forms one unfavorable donor-donor interaction with Asp126 (Fig. 7C). Compound 12 forms five conventional hydrogen bonds and two hydrophobic interactions, as presented in Fig. 7D. Compound 15 (Fig. 7E) also has five conventional hydrogen bonds, with Ser21 (2.67 Å), Asp18 (4.23 Å), Tyr19 (5.06 Å), Ser17 (5.27 Å), and Tyr19 (5.44 Å), as well as a carbon-hydrogen bond with Asp18 (4.32 Å) and two hydrophobic interactions with Pro109 (5.37 Å) and Tyr19 (4.8 Å). Lastly, the control drug (Doxycycline) has two conventional hydrogen bonds with Ile125 (4.11 Å) and Gly111 (4.17 Å) and two unfavorable donor-donor interactions with Asp126 and Lys113. The unfavorable interactions found in compound 7 and Doxycycline disqualified them from further analysis. Compound 15 (Fig. 7E) has more hydrogen bonds than compound 12; hence, compound 15 was used for the molecular dynamics simulations. SwissADME (http://www.swissadme.ch/) was employed to estimate the drug-likeness of our inhibitors, including their ADME inside the body [57]. The SwissADME program's Egan BOILED-Egg method was utilized to determine the inhibitors' absorption in the intestinal system and the brain. The BOILED-Egg (Brain Or IntestinaL EstimateD permeation predictive model), also known as the Egan egg, provides thresholds (WLOGP ≤ 5.88 and TPSA ≤ 131.6) as well as a well-defined graphic illustration of how far a chemical structure deviates from the ideal for optimal absorption [58]. In Fig. 8, the molecules in the white part of this 2D graphical representation are predicted to be passively absorbed by the gastrointestinal (GI) tract, whereas the yolk area represents chemicals that can passively cross the blood-brain barrier (BBB). None of the chemicals are predicted to be absorbed by the brain, as seen in the graph. The gastrointestinal absorption of all inhibitors was within the tolerable limits (WLOGP ≤ 5.88 and TPSA ≤ 131.6) (Fig. 8).
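The BOILED-Egg "egg white" test above is just a pair of threshold checks; the sketch below encodes the thresholds quoted in the text. The property values in the example are illustrative, not taken from the paper's tables.

```python
def predicted_gi_absorption(wlogp: float, tpsa: float) -> bool:
    """Inside the egg-white threshold (WLOGP <= 5.88 and TPSA <= 131.6)
    => predicted passive gastrointestinal absorption."""
    return wlogp <= 5.88 and tpsa <= 131.6

print(predicted_gi_absorption(wlogp=2.1, tpsa=95.0))   # True: inside the egg white
```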
The blue dot (compound 5) indicates a molecule that P-glycoprotein is predicted to efflux from the central nervous system (CNS), whereas the remaining compounds (red dots) are predicted not to be effluxed from the CNS by P-glycoprotein. Figure 9 depicts the bioavailability radar of the compounds for six physicochemical characteristics. The bioavailability radars of compounds 15 (Fig. 9A) and 18 (Fig. 9B) allowed a quick assessment of drug-likeness. The bioavailability radar takes into account the following six physicochemical characteristics: (1) lipophilicity (XLOGP3 between -0.7 and +5.0), (2) size (molecular weight between 150 and 500 g/mol), (3) polarity (total polar surface area between 20 and 130 Å²), (4) solubility (log S not higher than 6), (5) saturation (fraction Csp3 not less than 0.25), and (6) flexibility (not more than 9 rotatable bonds); these six ranges are collected in the sketch at the end of this section. The pink area reflects the optimal range of these traits [59], while the red line shows each compound's properties. In Fig. 9, the insaturation of both compounds is visible, whereas the other characteristics are inside the pink area. As a result, we can conclude that these chemicals are expected to be bioavailable when taken orally.

The MD simulations of the docked complexes

The MD simulations were executed to assess the stability of the docked complexes. The complex stability was investigated by calculating the backbone root-mean-square deviation (RMSD), root-mean-square fluctuation (RMSF), and solvent-accessible surface area (SASA). The RMSD of the Cα atoms in the docked complexes was assessed to follow the structural deviations over the simulation trajectory. The complexes reached their stable state after 1 ns, which indicated structural stability. The RMSD value of the SARS-CoV-2 protein complex is 2.76 Å and that of the Pseudomonas aeruginosa protein complex is 3.47 Å. As shown in Fig. 10A, the fluctuation of the SARS-CoV-2 protein complex was within an acceptable range, with an RMSD of less than 3 Å, indicating the stability of the protein complex conformation. The fluctuation of the Pseudomonas aeruginosa protein complex (Fig. 11A) exhibited an increasing RMSD value toward the end of the simulation. To examine the local differences in protein flexibility, the RMSF results were calculated by averaging over the backbone atoms of all residues (Figs. 10B and 11B). The fluctuations shown play a significant role in protein complex flexibility, influencing protein-ligand activity and stability. A high RMSF value demonstrates more flexibility, with a maximum level of fluctuation in the residue positions at around 400 ps (Fig. 11B; Table 4). MD simulation was applied to confirm the reliability of each ligand in the active site of the enzymes. The freshly identified hit compounds formed stable hydrogen bond interactions with the modeled active residues, e.g., Glu299 and Met6 for the SARS-CoV-2 main protease (Fig. 10D) and Tyr19 for the RNA polymerase-binding transcription factor DksA (Fig. 11D). The MD simulations also confirmed that each hit compound formed hydrophobic interactions with residues occupying the active sites of the SARS-CoV-2 main protease and the RNA polymerase-binding transcription factor. Finally, we propose the two hit compounds as practical leads for COVID-19 main protease and RNA polymerase therapeutics against SARS-CoV-2 and Pseudomonas aeruginosa inhibition, respectively.
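As referenced above, the six bioavailability-radar ranges can be written as simple predicates. The sketch below encodes them exactly as quoted in the text; the property keys are illustrative, and this is not the SwissADME implementation.

```python
RADAR_RULES = {
    "xlogp3":    lambda v: -0.7 <= v <= 5.0,   # (1) lipophilicity
    "mw":        lambda v: 150 <= v <= 500,    # (2) size, g/mol
    "tpsa":      lambda v: 20 <= v <= 130,     # (3) polarity, Å^2
    "log_s":     lambda v: v <= 6,             # (4) solubility
    "frac_csp3": lambda v: v >= 0.25,          # (5) saturation
    "rot_bonds": lambda v: v <= 9,             # (6) flexibility
}

def radar_violations(props: dict) -> list:
    """Names of the criteria a compound falls outside of."""
    return [name for name, ok in RADAR_RULES.items() if not ok(props[name])]

# A compound with frac_csp3 = 0.10 returns ["frac_csp3"], matching the
# "insaturation" violation visible for compounds 15 and 18 in Fig. 9.
```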
Conclusion

The created 2D-QSAR models' regression statistics demonstrated that they were statistically significant. Furthermore, in the fitting, internal, and external cross-validation trials, relatively low residuals were obtained, showing that the constructed models were predictive; their satisfactory Q2LOO, R2, Q2F1, Q2F2, Q2F3, and CCC values backed up this claim. In the docking simulations, compounds 15 and 18 were predicted as the best RNA polymerase-binding transcription factor and SARS-CoV-2 virus main protease inhibitors, respectively (with the strongest binding affinity), and as possible orally active drug candidates (based on the BOILED-Egg and bioavailability radar approaches). Molecular dynamics simulations, including RMSD, RMSF, and SASA analyses, affirmed their binding stability with the respective modeled proteins throughout the simulation time. The present work can be productive in identifying new remedies against the SARS-CoV-2 virus main protease and Pseudomonas aeruginosa; that said, experimental (in vitro and in vivo) studies are required to confirm our theoretical analysis.
2022-06-19T05:10:41.135Z
2022-06-17T00:00:00.000
{ "year": 2022, "sha1": "daff707ae6822df46b3725e2c93dad6f25d0e3f5", "oa_license": "CCBY", "oa_url": "https://jgeb.springeropen.com/counter/pdf/10.1186/s43141-022-00362-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e278196877514c70e046d140a5e89d98e4a654d6", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
14100261
pes2o/s2orc
v3-fos-license
Therapeutic benefits and side effects of Azathioprine and Aspirin in treatment of childhood primary arterial stroke

Background Childhood primary angiitis of the central nervous system (cPACNS) is a rare idiopathic vasculitis diagnosed most frequently in adults. Children with this disorder can present with a range of neurological symptoms and signs, including decreased consciousness, seizures, hemiparesis, cranial nerve deficits, and cognitive deficits. Delayed diagnosis and treatment may compromise the outcome. Therapeutic modalities including anti-platelet agents, corticosteroids, azathioprine, cyclophosphamide, and other immunomodulatory agents have been used with variable success. Purpose We wanted to study a cohort of children with childhood primary angiitis of the central nervous system (cPACNS) and evaluate the efficacy and safety of their management. Methods The current study is an observational cohort study that included 68 patients aged ≤16 years admitted with acute ischemic strokes (AIS) within 14 days of symptom onset at the Department of Neurosciences at Children's Hospital, Lahore, Pakistan, from January 2009 to December 2010. They were subjected to physical examination and laboratory and neuroimaging evaluation. They received pulses of intravenous steroids and/or immunoglobulins for 4 weeks, with maintenance doses of azathioprine and low-dose aspirin for 24 months, and were kept on follow-up for 2 years. Results Sixty-eight patients were included: 42 (61.76%) boys and 26 (38.23%) girls, whose mean age was 8.5 ± 3.5 years. Presenting symptoms and signs included fever (20%), headache (64%), disturbed consciousness (30%), seizures (55%), hemiparesis (60%), and motor deficit (70%). Neuroimaging studies revealed ischemic strokes in 50 patients (73.5%), hemorrhagic strokes in 10 (14.7%), and ischemic-hemorrhagic lesions in 8 (11.8%). Male sex, deep coma, and raised intracranial pressure were poor prognostic signs. Mortality was encountered in 12 patients (17.64%), with a normal outcome in 11 (16.17%), minor disabilities in 14 (20.59%), moderate disabilities in 11 (16.17%), and severe disabilities in 20 (29.41%). Conclusions Characteristic features of cPACNS on presentation may predict later progression and outcome and identify high-risk patients, which may guide the selection of patients for immunosuppressive therapy. Further studies are required to substantiate our findings regarding immunosuppressive therapy for such patients.

Introduction

Childhood primary angiitis of the central nervous system (cPACNS) is a rare idiopathic vasculitis diagnosed most frequently in adults. Increased recognition of PACNS and advances in the diagnosis of neurological disorders have led to more diagnoses, evident from case reports providing enriched clinical and pathological descriptions of cPACNS. cPACNS is a form of idiopathic vasculitis restricted to the brain and spinal cord, with a slowly progressive course. 1 The true incidence of cPACNS remains unknown. Symptoms and signs of central nervous system (CNS) vasculitis are frequently subtle, subacute, and often non-specific in nature. Children with this disorder can present with a range of neurological symptoms and signs, including decreased consciousness, intractable seizures, hemiparesis, cranial nerve deficits, and severe cognitive deficits. 2 Delayed diagnosis and treatment may compromise survival and/or outcome. No laboratory investigations are consistently diagnostic; neuroimaging studies are therefore necessary for diagnosis.
Hence, identification and early diagnosis of children with this disorder are crucial, because with standardized treatment a good neurological outcome is a realistic goal. 3 Therapeutic modalities including anti-platelet agents, corticosteroids, azathioprine, cyclophosphamide, and other immunomodulatory agents have been used with variable success. Moreover, early immunosuppressive therapy might improve the prognosis. 4,5 This study aims to describe the clinical manifestations of a cohort of children presenting with cPACNS and to report the efficacy and safety of their management.

Follow-up of patients after hospital discharge was maintained for 2 years. Ethical approval for the study was obtained.

Statistical analysis

Data were recorded and statistically analyzed using the Statistical Package for the Social Sciences (SPSS) version 12.0 (Chicago, IL). Frequencies were calculated for qualitative data, while mean ± SD and median were calculated for quantitative data.

Results

Ninety-four patients with a clinical diagnosis of childhood arterial ischemic stroke (cAIS) were identified from the total admissions to the Department of Neuroscience at Lahore Children's Hospital, Pakistan. Sixty-eight (72.3%) met the inclusion criteria and were diagnosed as cPACNS, while 26 (27.7%) had strokes secondary to conditions other than primary pathology of the cerebral arteries. These patients were 42 boys (61.76%) and 26 girls (38.23%); their mean age was 8.5 ± 3.5 years (median age 7.4 years, range 1.5-16 years). Forty-two patients (61.76%) were <5 years and 26 (38.23%) were >5 years old. The median time between onset of symptoms and/or signs and diagnosis was 12 days (range 1-18 days). Headache was found in 64% of patients, hemiplegia in 60%, seizures in 55% (focal 30%, generalized 25%), and a decreased level of consciousness in 30% (Figure 1). Based on the findings of CA and/or MRA, 51 patients (75%) had non-progressive and 17 (25%) had progressive arteriopathies. Fifty patients (73.54%) had ischemic strokes and 10 (14.70%) had hemorrhagic strokes, while 8 (11.76%) had ischemic-hemorrhagic lesions (Figure 2). Of the recruited patients, 56 completed induction and carried on maintenance therapy as per the approved management protocol (Table 1); 41 (73.21%) of them with aspirin alone and 15 (26.79%) with combined aspirin and azathioprine. Mortality was encountered in 12 patients (17.64%); 11 (16.17%) were normal, 14 (20.59%) had minor disabilities, another 11 (16.17%) had moderate disabilities, and 20 (29.41%) had severe disabilities. The mortalities included 7 males and 5 females; their neuroimaging studies showed 5 with hemorrhagic stroke, 5 with hemorrhagic-infarct stroke, and 2 with ischemic stroke with progressive arteriopathy. Eight of them had severe bilateral involvement of major cerebral arteries and/or massive parenchymal bleeding. No statistically significant differences were found for age, localization of acute ischemic stroke (AIS), or occurrence of seizures in relation to morbidity and mortality.

Table 1: Treatment protocol for childhood arterial ischemic strokes at the Children's Hospital, Lahore, Pakistan
I-Induction therapy (5-10 days):
➢ Methylprednisolone: 25 mg/kg intravenously over 4 hours daily for three days, and/or intravenous immunoglobulin 400 mg/kg/day over 6 hours for five days.
➢ Oral prednisone: 2 mg/kg daily (maximum 60 mg daily) for 30 days, to be tapered over 30 days.
➢ Supplementary calcium and vitamin D are provided during prednisone treatment.
➢ Heparin (for ischemic strokes with infarction size ≤50% of the cerebral hemisphere): loading dose 75 units/kg intravenously, followed by 20 units/kg/hour for children over one year of age (or 28 units/kg/hour for children below one year of age) for 3-5 days, followed by oral anticoagulants for 30 days.
➢ Anticonvulsants and antipsychotics as needed.
➢ Antibiotics, antivirals, and antacids, along with other supportive care as needed.
➢ Management of raised intracranial pressure as needed.
II-Maintenance therapy (24 months):
➢ Aspirin 3 mg/kg daily for all ischemic strokes.
➢ Aspirin 3 mg/kg and azathioprine 1 mg/kg daily for progressive arteriopathies.
➢ Anticonvulsants, antipsychotics, nutrients, and other supportive care as needed.

No secondary hemorrhages were observed among the ischemic-infarct patients who were treated initially with intravenous heparin and continued on oral anticoagulants.

Discussion

Childhood primary angiitis of the central nervous system (cPACNS) is a reversible cause of severe neurological impairment, intractable seizures, and cognitive decline. Once clinically suspected, angiography and/or MRA are the key imaging modalities. 7 Epidemiological studies have revealed an annual incidence of 2.5-2.7 pediatric strokes per 100,000 children. This figure comprises ischemic and hemorrhagic events and excludes strokes from trauma or birth-related complications. 1 Our study is consistent with a retrospective review of cPACNS, 8 but that review was limited to one pediatric neurology department in Punjab, and the frequency of stroke cannot be extrapolated to the whole population. Other studies, based on hospital discharge databases, have found a higher incidence. 9 Similarly, a high incidence was reported from two Saudi hospitals based on admission data, and it seems that the high incidence in these hospitals was attributable to tertiary care centers providing services to several regions of the country. 10 Our finding of male dominance (61.76%) among the studied patients with pediatric ischemic strokes is in agreement with other studies. 11,12 The exact explanation for the apparent male predominance in our study and other studies is still unknown. However, an Indian study documented an equal sex incidence among children with AIS. 12 Febrile illness preceded presentation to our hospital in 30% of our studied patients and occurred after admission in 20% of them, which may be attributed to the high prevalence of infections in our society. Comparable findings were reported by Najaraja et al. (1994), who suggested that viral infections could be a triggering factor for a vascular lesion resulting in thrombosis and leading to vascular occlusion. 14 Headache was one of the presenting symptoms in 64% of our patients, which is in agreement with Braun et al. 15 The disturbed consciousness in 26.5% of our patients at the time of hospital admission is in agreement with Adam et al. (2004). 16 Comparable reports of seizures in such patients were found in our series (55%) and other studies. 7,9,14,17,18 A preceding history suggestive of transient ischemic attacks (TIAs) was reported in 20.6% of our studied patients, in accordance with Lantheir et al. (2000). 19 In our study, neuroimaging of the brain showed abnormal findings in 100% of patients, and the classification of stroke type was: ischemic infarcts in 73.54% of patients, hemorrhagic strokes in 14.70%, and hemorrhagic-ischemic infarcts in 11.76%.
These findings are comparable to Makhija et al., who reported ischemic infarction in 91% of their studied pediatric patients with strokes. 20 Current treatment strategies for pediatric AIS are mainly anticoagulation, and despite the differences in pathophysiology and outcomes from adult AIS, therapeutic management remains similar because of the paucity of evidence from dedicated pediatric observational studies and clinical trials. 21 In the current study, patients who had hemorrhagic strokes, hemorrhagic-infarct lesions, and increased intracranial pressure were treated conservatively, and only 4 patients required craniotomy to remove large blood clots in order to lower the intracranial hypertension. On the other hand, 80% of patients with infarct strokes received intravenous heparin and were later switched to oral anticoagulants. The remaining 20% of this group had very large infarcts (greater than 50% of a single hemisphere) or presented late, and they were maintained on aspirin. Upon hospital discharge, patients with infarct strokes were kept on aspirin, 3 mg/kg once daily, and those with progressive arteriopathy were carried on azathioprine 1 mg/kg/day, commenced on the 30th day, in addition to aspirin; both drugs were recommended for 2 years with follow-up. There is good evidence for the efficacy and tolerability of immunomodulatory therapies in immune-mediated neurological disorders such as Guillain-Barré syndrome (GBS), myasthenia gravis, and acute central nervous system demyelination. The data for immunomodulatory therapies in cPACNS, however, are limited; azathioprine has been used successfully in a few case reports of cPACNS. 22 Our patients were carried on the local treatment protocol (Table 1). 13 A similar outcome was also reported by Cnossen et al., who found severe neurological impairments in 54% of their studied children 12 months after hospital discharge. 21 Moreover, other studies have reported that long-term neurologic deficits can occur in 50% to 85% of infants and children after arterial ischemic stroke. 8-12,19 It was also found that infarcts in both hemispheres have been associated with poor outcome, while hemorrhagic infarction, the number of infarcts, and the size of the artery involved were not predictive factors. 23 It has been postulated that seizures at stroke onset, altered mental status, and complete middle cerebral artery cortical strokes are negative prognostic factors. 19 We believe that the large number of randomly selected consecutive patients in our study is important because, in comparison to individual cases or smaller previous case series, our cohort is likely to provide a wider spectrum of clinical findings relevant to the advancement of knowledge in the field. Moreover, our study defined progressive and non-progressive forms of cPACNS. Despite the ability to implement appropriate therapy for such patients, the disorder remains high risk for mortality and morbidity, which reflects the need for early diagnosis and initiation of therapy to improve outcome.

Conclusion

Findings from this study highlight the significant mortality and morbidity of childhood strokes and the impact of early diagnosis and treatment in improving the outcome in such patients. The availability of 24-hour neuroimaging facilities and dedicated acute stroke units for managing childhood stroke is of utmost importance. The use of immunosuppressive therapy in addition to anticoagulants might improve the neurological outcome in children with medium/large-vessel childhood primary angiitis.
2016-05-17T13:14:13.080Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "c4a59043aba6c10dc08ddd6259abb084779c3805", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc4117149?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c4a59043aba6c10dc08ddd6259abb084779c3805", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211740558
pes2o/s2orc
v3-fos-license
Marketing Strategy of Marine Tourism in The Village of Serangan Denpasar

The development of marine tourism based on the Community-Based Tourism (CBT) concept and the local wisdom of Tri Hita Karana could provide maximum benefits for ecology, socio-cultural continuity, and economic improvement. Marine tourism utilizes coastal areas with all of their potential: biological resources, non-biological resources, artificial resources, and the natural environment. This study aims to examine the marketing strategy of marine tourism in Serangan Village using the 7P marketing mix (product, price, promotion, place, people, physical evidence, and process). The data used are primary data collected through observation, interviews, and questionnaires, with a purposive sampling method involving stakeholders in Serangan Village. The data obtained were analyzed with a qualitative descriptive method. The results of the study show that among the internal factors there are sixteen strengths and six weaknesses, and among the external factors there are eleven opportunities and six threats. The recommended strategy is to optimize the potentials and infrastructure, produce more creative and diverse tour packages, improve the quality of products and services, manage security, cooperate with stakeholders in all aspects, create unique souvenirs of Serangan Village, and introduce the importance of the seven charms and the website of marine tourism in Serangan Village.

Keywords—marine tourism, coastal area, marketing strategy

I. INTRODUCTION

Geographically, the location of Serangan Village is very strategic: it is close to the airport and close to tourism areas such as Kuta, Sanur, Denpasar City, and Tanjung Benoa. Its area is 112 ha, surrounded by sea, and 60% of it is coastal area. Law No. 1 of 2014 [1] states that a small island is an island with an area equal to or smaller than 2,000 km² (two thousand square kilometers), together with its ecosystem. The reclamation of Serangan Island enlarged its area roughly fourfold to 476 ha and connected the village with Bali Island. As a coastal area, Serangan has the potential of coastal natural resources, artificial resources, and culture. The natural resources include beaches with white sand, calm sea water, mangrove forests, fish, coral reefs, and other marine biota. A number of potentials include: a) white-sand beaches with strong potential to be developed into marine tourism attractions, in the form of businesses leasing lounge chairs and beach umbrellas and offering traditional massage; b) seaweed; c) the clear blue sea that adorns the coast of Serangan Island; d) coral gardens; and e) mangrove forest [2]. The development of tourism is focused on the potential of marine attractions that have uniqueness, beauty, and value in the form of natural diversity and man-made products, which become the target or destination of tourist visits. The development of marine tourism is carried out in the context of improving the economy and advancing the quality of life, independence, and welfare of the community. This can be achieved by combining the utilization of all existing potentials, based on local values, involving local communities, and maintaining environmental sustainability, customs, and culture. Tourism management with a sustainable concept can generate economic, social, and environmental benefits for the country, the surrounding area, investment in the tourism industry, and society.
Furthermore, [3] stated that tourism development requires a lot of effort and contributions from all stakeholders, with various approaches and improvements, to become successful. Sustainable tourism consists of three pillars, namely the social, economic, and environmental pillars. The three are interdependent with one another, but the focus is on the economic pillar. Coastal areas have been considered valued recreational locations for hundreds of years. [4] identified some benefits of the growth of coastal and marine tourism, which usually involves large groups of tourists looking for "sea, sun and sand" (3S) destinations. Tourism in coastal areas has developed in accordance with alternative tourism. Economic development must be able to bring positive impacts on the environment and people's lives. [5] said that large- or small-scale tourism can become attractive and economically successful if it has intrinsic superiority. Appropriate management of resources should meet economic and social needs while maintaining cultural integrity, biodiversity, and ecological processes [6]. WTO [7] provides a more balanced definition, between local residents and tourists and the wise use of natural resources, that can meet the needs of business-oriented producers together with nature and conservation. This is reinforced by Butcher [8] in Ernawati (2018), arguing that CBT can use the surrounding natural environment as an appeal. This global approach is widely used as a foundation for the development of tourism throughout the world; however, Weaver [9], Buckley [10], and Utting [11] argue that stakeholders face major challenges in applying this model. Opportunities for economic, social, cultural, and environmental development could be pursued in synergy with tourism development. Dimas Tegar [14] stated that maritime tourism has a great chance of developing the maritime economy through increasing the role of the tourism industry, maintaining the environment, and increasing the utilization of coastal areas for the community, with the concept of the Blue Economy. Wang Fang and Zhu [15] proposed the concept of "green thinking", which emphasizes the development of creative marine tourism products through low-carbon coastal tourism.

II. METHOD

The data used are primary data collected directly from the source through interviews, questionnaires, observation, and FGD (Focus Group Discussion), with a purposive sampling method [16] involving stakeholders in Serangan Village; the data were analyzed by a qualitative descriptive method.

III. RESULTS AND DISCUSSION

The results of the SWOT analysis are grouped into internal and external factors, as shown in Table I and Table II. The analyzed internal factors in the marketing strategy are described below:
• Aspects of marine products, consisting of four items: seafood culinary (strength); availability of marine attractions (strength); beauty of the sea and coast (strength); availability of natural conservation as educational tourism (strength).
• Price aspect: competitive prices of marine tourism packages (strength).
• Aspects of place, of which four were obtained: adequate transportation and roads (strength); good means of communication (signals) (strength); good and easily accessible road access (strength); guaranteed security at tourist locations (strength).
• Promotion aspects, which include: promotion through online advertising, brochures, and print media (weakness); travel agents providing marine tourism information clearly (weakness); being one of the marine tourism icons in Denpasar (weakness); availability of information about marine tourism products (weakness); having a tourist catalog that focuses on marine tourism (weakness).
• Aspects of people, consisting of: marine tourism instructors who are competent in their field (strength); quality of human resources (HR) supporting marine tourism (weakness); awareness of the cleanliness of the marine environment (weakness).
• Process aspect: strategic location (accessibility) (strength).

In Table II it can be seen that there are nine opportunities, namely: the presence of online news media and social networking sites; interest and trends in choosing marine tourism destinations; modern people's lifestyles; tourist purchasing power; the fame of Bali as a world destination; the influence of technology usage; the active role of local government; the influence of development on the environment; and the positive impact on coastal preservation. The threats are: support of the local community; tourists' knowledge of the destination; quite high competition in the marine tourism market; and social, political, and religious conditions. Some opportunities owned by the village have been used by local communities to open culinary, tourism, and trading businesses. Increasing tourism will increase employment by 356 people, or 26.84% of the workforce [17-19]. Culinary tourism is developing very rapidly. Peak visits usually happen on workdays, Monday-Friday, at 11.00-13.00 and in the late afternoon from 16.00 to 21.00, dominated by young people and families. Visitors who come on holidays, Saturday and Sunday, usually come from 09.00 to 11.00 a.m.

Based on the results of the analysis, it was found that the IFAS score is 3.89 and the EFAS score is 3.86, placing the village in quadrant 1, which supports aggressive strategies, as shown in Figure 1 (a computational sketch follows after the strategy list below). The marketing strategy of marine tourism in Serangan Village can be seen in Table III.
• S-O strategy: optimizing the potential and utilization of infrastructure; developing creative tour packages that are more diverse; learning from other marine tourism destinations that are already developed, to improve quality; creating a structure of special security organizations in tourist areas so that security is more controlled; increasing collaboration with travel agents for mutual benefit; increasing marketing communication in the form of online promotion.
• S-T strategy: developing mutually beneficial partnerships in promotion.
• W-O strategy: cooperating and coordinating with the BTID managers and the managers of Tanjung Benoa marine tourism in the utilization of marine tourism sports areas; making typical souvenirs; making breakthroughs to socialize the seven charms; making a special website for marine tourism in Serangan Village.
• W-T strategy: maintaining the quality of services so that tourists feel comfortable and satisfied; developing local HR skills and expertise through certification training.
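A minimal sketch of placing the IFAS/EFAS scores on the SWOT quadrant diagram referenced above. The 2.5 midpoint used as the quadrant divider is an assumption (the paper reports only the scores and the resulting quadrant); the function name is illustrative.

```python
def swot_quadrant(ifas: float, efas: float, midpoint: float = 2.5) -> int:
    """Quadrant 1 supports aggressive (S-O) strategies; the other
    quadrants correspond to W-O, W-T, and S-T strategies."""
    if ifas >= midpoint and efas >= midpoint:
        return 1   # aggressive (S-O)
    if ifas < midpoint and efas >= midpoint:
        return 2   # turnaround (W-O)
    if ifas < midpoint and efas < midpoint:
        return 3   # defensive (W-T)
    return 4       # diversification (S-T)

assert swot_quadrant(3.89, 3.86) == 1   # the scores reported in the text
```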
IV. CONCLUSION AND RECOMMENDATION

The conclusions that can be drawn are as follows: 1) there are sixteen strengths, which essentially include strategic location, road access, tourist attractions, beauty, turtle conservation, facilities, and prices; 2) the weaknesses concern the community's readiness to develop local tourism, such as understanding of tourism, comfort, cleanliness, beauty, and utilization of coastal potential; 3) the opportunities are: information technology development, community lifestyles and trends toward choosing marine tourism, purchasing power, the impact of development on the environment, and the government's active role; 4) the threats are: support of the local community, tourists' knowledge, competition among marine tourism destinations, economic and political conditions, and environmental pollution; 5) the recommended strategy is to optimize the potentials and infrastructure, produce more creative and diverse tour packages, improve the quality of products and services, manage security, cooperate with stakeholders in all aspects, create unique souvenirs of Serangan Village, and introduce the importance of the seven charms and the website of marine tourism in Serangan Village. The recommendation is to develop a model for coastal area development funded from tourism income, such as returning a small amount of money to local (adat) institutions for the development of local values so that they remain alive and sustainable.
2019-11-22T00:45:51.810Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "6b142ce3851a0fc521e87150bc625cd5c9f02510", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/icastss-19.2019.47", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "271a58c16c2c75568782ae68afe92e3d71ca1dfe", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
255046635
pes2o/s2orc
v3-fos-license
Effect of Temporarily Opening and Closing the Marine Connection of a River Estuary

The lower Rio Grande is a river-dominated estuary that serves as the border between Texas, USA, and Tamaulipas, Mexico. River estuaries encompass the section of the river influenced by tidal exchange with the Gulf of Mexico, but the Rio Grande's connection with the Gulf is intermittent and can be temporarily open or closed. During the 4.8-year study period, the river mouth was closed 30% of the time, mostly during average or dry climatic conditions, the temporary closing of the river mouth being linked to hydrology. When the Rio Grande estuary is closed, salinity is low (1.5 psu compared to 4.8 psu when open), nitrate plus nitrite are low (4.4 μM compared to 31.5 μM when open), and ammonium is high (9.6 μM compared to 4.3 μM when open), but chlorophyll is similar (20 μg/L compared to 21 μg/L when open). Benthic macrofaunal abundance and biomass are higher when the river mouth is closed: 16,700 individuals m⁻² and 3.3 g m⁻², compared to 8800 individuals m⁻² and 2.4 g m⁻² when the Rio Grande river mouth is open. Benthic macrofaunal community structure is divided into two groups: chironomid larvae and Oligochaeta dominated when the river mouth was closed, whereas the polychaetes Mediomastus ambiseta and Streblospio benedicti dominated when the river mouth was open. The implication of these results for managing freshwater flows is that the open and closed conditions each have a characteristic benthic macrofaunal community that is strongly influenced by system hydrology.

Introduction

River-dominated estuaries are narrow and drain directly into oceans rather than into semi-enclosed bays. Historical studies have stressed the importance of freshwater inflow to estuarine systems and have demonstrated the role of inflow as a major factor driving estuary functioning and health (Chapman 1966; Kalke 1981). Inflows serve a variety of important functions in estuaries, including the creation and preservation of low-salinity nurseries, sediment and nutrient transport, allochthonous organic matter inputs, and the movement and timing of critical estuarine species (Longley 1994). The nursery habitat function is facilitated by tidal exchange with the adjacent sea for species with estuarine-dependent life cycles. Studies have also highlighted the importance of river outflow to shelf dynamics (Garvine 1974), fisheries (Aleem 1972), pollution (Tuholske et al. 2021), and connectivity among coastal components (Justić et al. 2021).

Estuaries are geologically and hydrologically diverse (Elliott and McLusky 2002; Montagna et al. 2013). Low-flow estuaries often have a connection to the sea that is intermittent, i.e., temporarily open and closed estuaries (TOCE), because low flow rates can cause the basin to be cut off from its connection to the sea (Allen 1983), and sediment may deposit at the mouth of the estuary due to longshore transport (Hinwood and McLean 2015). The ecological effects of intermittent opening and closing on estuarine organisms are unclear. For example, in Australia, benthic community structure differs between open and closed estuaries, but open and closed estuaries are also a result of different catchment sizes (Hastie and Smith 2006). Small watersheds provide a smaller area for rain to flow downstream, leading to lower inflow and a higher likelihood that estuaries are closed (Mondon et al. 2003).
So it is possible that catchment size is the primary driver of benthic community patterns (Hastie and Smith 2006). Also, in Australia there were no correlations with physical classifications or linkages at appropriate spatial and temporal scales (Dye 2006). There is a need for further examination of ecological effects in intermittent estuaries.

The goal of the current study is to determine the effects of intermittent opening and closing of the Rio Grande, Texas, USA, estuary on benthic communities. Benthic infauna (body length > 0.5 mm) are especially sensitive to changes in inflow and can be useful in determining its effects on estuarine systems over time (Montagna and Kalke 1992, 1995; Montagna et al. 2013; Montagna 2021). Benthos are excellent indicators of the environmental effects of a variety of stressors because they are abundant, diverse, and sessile. Benthos abundance, biomass, and diversity were measured to assess change over time. The hypothesis is that benthic community structure differs when the estuary is open versus closed, reflecting differences in freshwater inflow and connection to the sea. In addition, relevant water quality and sediment variables (i.e., salinity, temperature, dissolved oxygen, nutrients, chlorophyll, grain size, and sediment carbon and nitrogen content) were measured during each sampling period to assess inflow effects on the overlying water column and sediments, which make up the benthic habitat.

Field and Laboratory Analyses

The Rio Grande was sampled quarterly between 25 October 2000 and 6 August 2005. Previous benthic studies (Montagna and Li 2010) demonstrated quarterly sampling to be effective in capturing temporal benthic dynamics while economizing on temporal replication. Quarterly sampling occurred every January, April, July, and October. The timing of the sampling captures the major seasonal inflow events and temperature changes in Texas estuaries (Montagna et al. 2011).

Three stations on the lower Rio Grande were chosen between the confluence with the Gulf of Mexico and Brownsville (Fig. 1). Stations A (25° 57.584ʹ N, 97° 13.662ʹ W), B (25° 57.796ʹ N, 97° 12.668ʹ W), and C (25° 57.720ʹ N, 97° 11.105ʹ W) were 12.6 km, 11.3 km, and 5.5 km from the Gulf of Mexico, respectively. In April 2002, it was discovered that station C was not on the main channel of the river, but in a secondary meander channel situated north of the main channel. A new station, D (25° 57.610ʹ N, 97° 11.089ʹ W), was established in the main channel, approximately 100 m from station C.
Sampling at station D began in July 2002 and continued quarterly to July 2005 (Table S1). After being missed in July 2002, sampling resumed at station C in October 2002. One additional station, E (25° 57.953ʹ N, 97° 10.420ʹ W), located 1.8 km (1.1 mi) downstream of station D and 5.1 km (3.2 mi) from the mouth, was added in October 2002. Environmental samples were collected instantaneously and synoptically with the benthic samples during each sampling period, and all data are available online (Montagna 2022). Salinity (psu), conductivity (mS cm⁻¹), temperature (°C), pH, and dissolved oxygen concentration (mg L⁻¹) were measured using multiprobe sondes and water quality meters. Measurements were made both at the surface (0.1 m deep) and at the bottom (0.1 to 0.2 m above the sediment-water interface). A YSI 6920 multiprobe sonde was used, with accuracy as follows: dissolved oxygen (DO) ± 0.2 mg L⁻¹, temperature ± 0.15 °C, pH ± 0.2 units, depth ± 0.02 m, and salinity the greater of ± 1% of the reading or ± 0.1 psu. Salinity readings were automatically corrected to 25 °C.

Sediment grain size analysis was performed using standard geologic procedures (Folk 1964). A 20 cm³ sediment sample was mixed with 50 mL of hydrogen peroxide and 75 mL of deionized water to digest organic material in the sample. The sample was wet-sieved through a 62-μm mesh stainless steel screen using a vacuum pump and a Millipore Hydrosol SST filter holder to separate rubble and sand from silt and clay. After drying, the rubble and sand were separated on a 125-μm screen. The silt and clay fractions were measured using pipette analysis. Percent contribution by weight was measured for three components: rubble (e.g., shell hash), sand, and mud (silt + clay).

The proportions of organic and inorganic carbon and nitrogen in the sediment were measured, as were the carbon and nitrogen isotopes δ13C and δ15N, using a Finnigan Delta Plus mass spectrometer linked to a CE Instruments NC2500 elemental analyzer. The system uses Dumas-type combustion chemistry to convert nitrogen and carbon in solid samples to nitrogen and carbon dioxide gases. These gases are purified by chemical methods and separated by gas chromatography. The stable isotopic composition of the separated gases is determined by a mass spectrometer designed for use with the NC2500 elemental analyzer. Standard material of known isotopic composition was run every tenth sample to monitor the system and ensure the quality of the analyses.

Benthos were sampled with a 6.7-cm diameter core tube (35.26 cm²) held by divers or with a coring pole. Three replicate cores were taken within a 2-m radius at each station. Cores were sectioned at depth intervals of 0-3 cm and 3-10 cm. Samples were preserved in the field with 5% buffered formalin. In the laboratory, samples were sieved on 0.5-mm mesh screens, sorted, identified to the lowest taxonomic level possible, and counted. Dry-weight biomass was measured by drying for 24 h at 55 °C and then weighing. The carbonate shells of mollusks were dissolved using 1 N HCl and rinsed with fresh water before drying. Abundance and biomass were extrapolated to the number of individuals (ind.) or biomass (g) per m² (the unit conversion is sketched below), but diversity metrics were not extrapolated and are reported per sample (per 35.26 cm² core).
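A unit-conversion sketch for the core extrapolation just described: counts from a single 35.26 cm² core (6.7 cm diameter) scale to individuals per m² by a factor of 10,000 cm² m⁻² / 35.26 cm² ≈ 283.6. The names are illustrative.

```python
CORE_AREA_CM2 = 35.26   # area of the 6.7-cm diameter core tube

def individuals_per_m2(count_per_core: float,
                       area_cm2: float = CORE_AREA_CM2) -> float:
    """Extrapolate a per-core count to individuals per square meter."""
    return count_per_core * 10_000.0 / area_cm2

# Example: 37 individuals in one core ~ 10,493 ind./m^2, the order of
# magnitude of the overall mean abundance reported in the results.
print(round(individuals_per_m2(37)))
```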
Hydrology

The approach used to assess temporal trends is to compare periods when the river mouth is open or closed, and to determine climate influences during wet or dry months. Wet and dry thresholds were determined using freshwater inflow data from a hydrological station ~70 km upstream of the benthic sampling stations, near Brownsville, Texas, which is managed by the International Boundary and Water Commission (Station 08-4750.00; www.ibwc.gov/wad/DDQBROWN.htm). Daily flow was smoothed by averaging the 30 days prior to and including each daily flow value. This 30-day criterion was used to account for the lag in benthic response after a freshwater event (Montagna and Kalke 1992). To classify wet and dry periods, the 30-day daily flow means were calculated over the 20-year period from 1985 to 2005. Macrofauna sample dates were deemed to be in "dry" weather conditions if the 30-day daily flow mean for the sampled date fell in the lowest 25% of values, in "wet" weather conditions if it fell in the highest 25% of values, and in "average" conditions otherwise (a classification sketch follows below).
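A sketch of the wet/average/dry classification just described: a 30-day trailing mean of daily flow (including the current day), classified against the 25th and 75th percentiles of the 1985-2005 record. pandas is assumed, and the series and function names are illustrative.

```python
import pandas as pd

def classify_climate(daily_flow: pd.Series) -> pd.Series:
    """daily_flow: daily values indexed by date for 1985-2005."""
    smoothed = daily_flow.rolling(window=30).mean()   # 30 days up to and including each date
    q25, q75 = smoothed.quantile([0.25, 0.75])
    return pd.cut(smoothed,
                  bins=[-float("inf"), q25, q75, float("inf")],
                  labels=["dry", "average", "wet"])
```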
Statistical Analyses

A one-way block analysis of variance (ANOVA) was run on replicates to test for differences in benthic response among stations and dates. Sampling dates are the main effect because the main goal is to test for differences over time. Stations are blocks because they were incomplete and are mainly a form of replication for date effects. No interaction term exists because this is an incomplete block design. Linear contrasts were used to test for differences between open and closed periods, and between wet and dry periods. Statistical analyses were performed using SAS software (SAS 2017). For benthic analyses, the sections were summed to a depth of 10 cm, and abundance and biomass were log-transformed prior to analysis. Species diversity was not transformed.

Multivariate analyses were used to analyze species distributions and how environmental variables affect those distributions. The water column structure and sediment structure were each analyzed using principal component analysis (PCA). PCA reduces multiple environmental variables into component scores, which describe the variance in the data set, to discover its underlying structure. The first two principal components were used. Spearman rank correlations between principal component scores were calculated to examine the relationship between the sediment and water column data. All variables were averaged by date-station and standardized to a normal distribution with a mean of 0 and variance of 1 prior to analysis, so that the relative scale of each variable did not affect the analysis.

Macrofaunal community structure was analyzed with non-metric multi-dimensional scaling (nMDS) using a Bray-Curtis similarity matrix among stations or station-date combinations. The resulting nMDS plot represents the macrofaunal community relationships among stations spatially, so that the distances among stations are directly related to the similarities in macrofaunal species composition among those same stations (Clarke et al. 2014). Relationships within each nMDS were highlighted with a cluster analysis using the group-average method, based on Bray-Curtis similarity matrices. The cluster analysis was displayed as similarity contours on the nMDS plots and in dendrograms, both using percentage similarity among factors. Significant differences between clusters were tested with the SIMPROF permutation procedure using a significance level of 0.05. Data were square-root transformed prior to analysis using Primer software (Clarke and Gorley 2015); the underlying dissimilarity is sketched below.
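A minimal sketch of the Bray-Curtis dissimilarity between two square-root-transformed abundance vectors, the distance underlying the nMDS and cluster analyses above. This is the standard formula, not the PRIMER implementation.

```python
import numpy as np

def bray_curtis(x: np.ndarray, y: np.ndarray) -> float:
    """Bray-Curtis dissimilarity of two abundance vectors (same species order)."""
    x, y = np.sqrt(x), np.sqrt(y)              # square-root transform, as in the text
    return float(np.abs(x - y).sum() / (x + y).sum())

# Percentage similarity, as drawn on the nMDS contour plots:
# similarity = 100 * (1 - bray_curtis(x, y))
```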
Results

Overall average salinities among all stations and periods (both open and closed) in the Rio Grande were low (4.0 ± 3.9 (mean ± SD); Table 1), and mean temperatures were high (25.2 ± 4.1 °C). Depths were shallow (0.3 ± 0.3 m). Ammonium levels were lower (5.6 ± 12.4 μM) than nitrite + nitrate (25.8 ± 25.7 μM). The nitrogen-to-phosphorus ratio (N:P) was 5.3 because phosphate was only 5.9 μM. The average chlorophyll-a concentration was high, at 21.0 ± 15.2 μg/L.

The PCA for the water quality variables found that 48% of the variance is explained by two new variables, PC1 and PC2 (Fig. 3). The first and second principal components (PC1 and PC2) explained 30% and 18% of the variation within the data set. Low temperature and high dissolved oxygen, chlorophyll, and pH were inversely related along PC1, which explains seasonal differences between winter and summer. The PC loading vectors for low salinity and high dissolved inorganic nitrogen and silicate lined up with PC2, explaining freshwater input, which dilutes salinity while delivering nutrients.

The opening and closing of the river mouth did not appear to affect water quality, because the samples collected when the mouth was open or closed ranged across the entire freshwater inflow axis (i.e., PC2; Fig. 4A). Samples collected in winter and spring often had positive PC1 values, and samples collected in summer and fall mostly had negative PC1 values (Fig. 4B). Again, seasonal samples ranged across the entire PC2 axis, indicating there were no seasonal effects on the freshwater inflow response. Samples collected during wet periods aligned with the freshwater inflow axis, because most wet-period samples had positive PC2 values while average- and dry-period samples had negative PC2 values (Fig. 4C). Thus, inflow is related to PC2 (i.e., the salinity/nutrient axis), there is some link between season and PC1 (i.e., the DO/temperature axis), and open/closed status does not appear to be correlated with either PCA axis.

The Rio Grande sediments are sandy (52%) with low porewater content (36%) and low total organic carbon (0.6%); nitrogen content was nearly unmeasurable (0.06%) (Table 1). The PCA for the Rio Grande sediments explained 85% of the variance in the data set (Fig. 5A). Mud versus sand content drove the response, explaining 62% of the variance. Percent nitrogen, porewater content, and total organic carbon content correlated strongly with mud content. The carbon isotope variable was the only one loading on PC2, which explained 23% of the variance. Sediment samples were collected in fall only. There were no differences between wet and dry periods. There was some evidence that the river mouth opening influenced sediment characteristics, because 6 of 7 samples taken when the mouth was closed were negative for PC1, meaning they were sandier (Fig. 5B).

Benthic macrofaunal abundance, biomass, and diversity varied over time (Fig. 6). All benthic metrics were similar among stations (Table 2). Abundance averaged about 10,600 ind./m² and biomass averaged 2.58 g/m² overall (Table 1). The analyses focused on change over time, and there were differences in abundance and biomass with the opening and closing of the river mouth and with wet periods as compared to average and dry periods (Table 2A). Benthic macrofaunal abundance and biomass were higher when the river mouth was closed (16,678 ind./m² and 3.27 g/m², compared to 8762 ind./m² and 2.36 g/m², respectively), but species richness was similar during closed and open periods (4.2 and 3.9 species/sample, respectively) (Table 2B). Abundance, biomass, and richness decreased from dry to average to wet climatic periods (Table 2C).

Non-metric multidimensional scaling (nMDS) indicates that there is complex seriation over time related to the opening and closing of the river mouth and to climatic periods (Fig. 7). This difference is illustrated by large shifts from July. Benthic macrofaunal diversity in the Rio Grande was low, with only 43 species in total found in all samples, four of which were insect larvae or nymphs (Table S1). Of the 43, 11 were dominant under various conditions (Tables 3 and S2). Chironomidae larvae were dominant when the river was closed and conditions were dry. The dominant polychaetes during dry conditions were Mediomastus ambiseta and Streblospio benedicti. S. benedicti was more abundant when the river mouth was open, while Oligochaeta were more abundant when the river mouth was closed. Biotic responses were further linked to abiotic drivers using correlation analysis between the PC scores for samples and the concurrent benthic responses (Table 4). Abundance (r = −0.35, p ≤ 0.0010), biomass (r = −0.45, p < 0.0001), richness (r = −0.45, p < 0.0001), N1 diversity (r = −0.31, p ≤ 0.0038), and Hʹ diversity (r = −0.33, p ≤ 0.0025) were inversely correlated with water column PC2, meaning that freshwater inflow was related to decreases in the benthic metrics. There were no correlations between water column PC1 and any benthic metric, meaning that season did not drive the benthic responses. There were no relationships between the benthic metrics and the sediment PC scores; therefore, neither sediment type nor the biogeochemical variables drive the benthic response in the Rio Grande.

Discussion

The intermittent nature of the Rio Grande is caused by reduced inflow to the system. During average and dry climatic periods, a sand bar can form at the mouth of the river and block exchange with the Gulf of Mexico. This leads to a transformation of the estuary into a lake. This lake-like effect was evidenced by the decreasing salinities over the course of the study period, when the climate was wetter after fall 2003. Following drought, the Rio Grande closed in 2001 and was re-opened in 2003 and 2004; consequently, salinities returned to estuarine conditions (Fig. 2).

The Texas coast lies in the northwestern Gulf of Mexico. There is a climatic gradient of decreasing rainfall from the northeast to the southwest that results in decreasing freshwater inflow to the Texas coast (Montagna et al. 2013). Even though the Rio Grande is the southwestern-most estuary along the gradient, it has lower salinity ranges and higher nutrient and chlorophyll concentrations than the lagoons and bays to the north (Palmer et al. 2011).
For example, during the same 5-year period, Rio Grande averages were as follows: salinity 4 psu, ammonium 5.6 µM, nitrite + nitrate 26.0 µM, phosphate 5.9 µM, and chlorophyll 21.0 µg/L; in contrast, Lavaca Bay (an open lagoon-like bay connected to the Lavaca River) averages were as follows: salinity 16 psu, ammonium 1.4 µM, nitrite + nitrate 3.6 µM, phosphate 1.3 µM, and chlorophyll 8.8 µg/L.

During sampling, it was noted that the Rio Grande has a large amount of cyanobacteria and filamentous green algae, which likely adds to the productivity and deposition of detritus in the system. This is supported by the isotope values in sediments. The sediment values of δ¹⁵N (7 ppt) and δ¹³C (−20 ppt) indicate the organic matter in sediments is likely derived mainly from deposited algae, based on comparative information found in Fry (2006).

The higher chlorophyll and nutrients in the Rio Grande are correlated to higher average benthic macrofaunal abundance and diversity compared to permanently open estuaries in Texas such as Lavaca Bay. For example, during the same 5-year period, Rio Grande macrofauna averages were biomass 2.6 g/m² and abundance 10,600 ind./m². In contrast, Lavaca Bay macrofauna averages were biomass 0.7 g/m² and abundance 3800 ind./m² (Palmer et al. 2011). Macrofauna diversity, however, was identical, with both estuaries having an Hʹ of 0.81. This is consistent with the finding that there are differences in macrofauna between open and closed estuaries in New South Wales, Australia (Hastie and Smith 2006).

Benthic macrofaunal biomass and diversity in the river-dominated Rio Grande estuary were lower than in other freshwater habitats along the Texas coast. For example, in the Nueces Delta marsh (connected to the Nueces River), biomass averaged 1.7 g/m² and Hʹ averaged 0.69 over 1 year between October 1998 and October 1999; however, abundance in the Nueces marsh was higher, 13,100 ind./m² (Palmer et al. 2002). Rincon Bayou is the main stem of the Nueces Delta marsh, and a hydrological restoration project was constructed to enhance the connection with the Nueces River to lower salinity (Ward et al. 2002; Montagna et al.), where the averages were salinity 9.7, biomass 0.5 g/m², abundance 4800 ind./m², and diversity 0.5 Hʹ. Thus, long-term average salinity in Rincon Bayou was a little more than twice as high as in the Rio Grande, but mean macrofauna abundance, biomass, and diversity were much higher in the Rio Grande (4.7 times, 2.3 times, and 1.7 times, respectively).

The macrobenthic community in the Rio Grande was dominated by Chironomidae larvae and Oligochaetes when the river mouth was closed, and by the polychaetes Mediomastus ambiseta and Streblospio benedicti when the mouth was open (Table 3). Thus, community structure changed over time in a serial fashion (Fig. 7). Wet periods were dominated by Mediomastus ambiseta and Chironomidae larvae. The lack of dominance by mollusks contradicts previous studies, which found freshwater inflow events lead to dominance by suspension-feeding bivalve species (Montagna and Kalke 1995; Montagna et al. 2002).

Overall, the Rio Grande appears to be more influenced by freshwater inflow than by the connection with the Gulf of Mexico. The lack of strong exchange with the Gulf of Mexico in 2001 caused the Rio Grande River to change from an estuarine ecosystem to a freshwater ecosystem, but from late 2002 through 2004 the system returned to brackish conditions. Species diversity increased with increasing salinity. Diversity in river-dominated estuaries was lower than in lagoonal estuaries in Texas (Palmer et al. 2011).
These results are consistent with those of previous studies in that species diversity increases from nearly freshwater to seawater conditions (Montagna and Kalke 1992; Mannino and Montagna 1997; Palmer et al. 2002; Ysebaert et al. 2003). One possible mechanism explaining low diversity in small estuaries is fluctuating water levels during drought or flood (Adams et al. 1992; Montagna et al. 2018). Another explanation for the low diversity of intermittent estuaries is that food chains are short, with high rates of connectivity and high rates of cannibalism (Mendonça and Vinagre 2018).

In the Rio Grande, hydrology had more influence in shaping benthic community dynamics than the intermittent opening and closing of the inlet with the Gulf of Mexico. This is consistent with studies on benthos from Australia (Hastie and Smith 2006). Not surprisingly, water column dynamics also appeared to be more influenced by river flow and hydrology than by the intermittent nature of the estuary. Both the abiotic and biotic responses were driven by hydrological changes over time. The benthic community in the Rio Grande was relatively homogeneous with distance from the sea (Table S1). In contrast, studies on intermittent estuaries in New South Wales, Australia, found heterogeneity with distance from the sea (Dye 2006).

Management Implications

The Rio Grande is a river with historical significance, as well as being the 2000-km river border between two North American countries. The International Amistad Reservoir separates the river into upper and lower portions that are hydrologically distinct (RGBBEST 2012). The flows from Fort Quitman, Texas (~ 100 km downstream from El Paso, Texas) to the Gulf of Mexico are divided between the USA and Mexico by a 1944 treaty. Flows in the lower Rio Grande have been reduced by reservoirs and irrigation. The lower 80 km of the Rio Grande is tidally influenced.

In 2007, the Texas Legislature passed Senate Bill 3, which required environmental flow standards to be developed for major river basins and estuarine systems. A group of scientists named the Lower Rio Grande Basin and Bay Expert Science Team (RGBBEST) was tasked with an environmental flow analysis to recommend an environmental flow regime adequate to support a "sound ecological environment" for the Lower Rio Grande. The RGBBEST (2012) defined a sound ecological environment as one that (1) maintains native species, (2) is sustainable, and (3) is a

The current study provides information pertinent to the management of environmental flows to the coast and to the question of reopening tidal connections. Hydrology affects hydrography, meaning the volume of fresh water flowing into an estuary is related to declines in salinity and increased concentrations of nitrate, nitrite, and phosphate, and estuaries are sinks for these nutrients. The implication is that these ecosystems have a characteristic community that is strongly influenced by the hydrology of the systems (Lill et al. 2013). Intermittent estuaries will require different approaches for setting environmental flow standards because of the alternating influences of watershed forcing and connections to the sea (Stein et al. 2021). Decisions to reopen a closed river mouth are often based on benefits to improve fisheries, or to reduce flooding, nutrients, or algal blooms (Conde et al. 2015).
The higher diversity and marine-estuarine community structure indicate that it is desirable to maintain the Rio Grande as an open estuary, and this may help restore the historical conditions of the habitat.

Fig. 1 Map of station locations in the Rio Grande, the border between Mexico and the USA

In the first week of February 2001, a sand bar formed and closed the mouth of the Rio Grande, stopping exchange with the Gulf of Mexico. The mouth was artificially opened with a backhoe on 18 July 2001 by the International Boundary and Water Commission (US State Department); however, it closed again in November 2001. The mouth of the Rio Grande was manually opened again on 9 October 2002 at Boca Chica Beach but closed on 15 October 2002. On 2 November 2002, a large rainstorm occurred near the river mouth, east of Brownsville, Texas, which caused enough flow pressure to breach the berm, restoring exchange between the river and the sea. The Rio Grande mouth has remained open from that date to the end of the study period (Randy Blankenship, personal communication, 20 May 2003). The mouth was open when the Rio Grande was sampled in late November 2002. Based on available reports, the river mouth was not blocked during the sampling period (October 2002 to July 2003); in fact, heavy rain occurred in October to November 2002 that delayed sampling of stations C and E for a month (Fig. 2A). Salinity change over time is a function of both Gulf of Mexico exchange and river flow. River flow was low from the beginning of the sampling period in 2000 until late September 2003 (Fig. 2B). A series of large flood events occurred from September to November 2003, April to July 2004, September to October 2004, and July 2005.

Fig. 2 Time series of salinity averaged over all stations at each sample period and river hydrology. A Salinity mean and standard deviation during sampling events. B Daily flow rates with blocks when the Rio Grande mouth was open or closed to the Gulf of Mexico

Table 1 Overall sample means for variables measured at all stations in the Rio Grande from October 2000 to August 2005

Table 4 Relationship between benthic metrics and principal component (PC) scores. Abbreviations: Stat, statistic; r, Spearman correlation coefficient; P, probability; n, number of samples

Based on a combination of historical conditions, altered hydrology, and temporarily open and closed estuary conditions, the RGBBEST (2012) determined that the Lower Rio Grande was an unsound environment.
2022-12-24T16:07:57.190Z
2022-12-22T00:00:00.000
{ "year": 2022, "sha1": "22bf2b5e38e9e65dd356a9b7aefdca05f933cf7e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12237-022-01159-6.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "53a2630bdfaedb7b7fb7f6bad7ffa1e8cf5020d6", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
55011390
pes2o/s2orc
v3-fos-license
SPATIAL CONTINUITY OF ELECTRICAL CONDUCTIVITY, SOIL WATER CONTENT AND TEXTURE ON A CULTIVATED AREA WITH SUGARCANE

Spatial variability of soil attributes affects crop development. Thus, information on its variability assists in soil and plant integrated management systems. The objective of this study was to assess the spatial variability of the soil apparent electrical conductivity (ECa), electrical conductivity of the saturation extract (ECse), water content in the soil (θ) and soil texture (clay, silt and sand) of a sugarcane crop area in the State of Pernambuco, Brazil. The study area had about 6.5 ha and its soil was classified as an orthic Humiluvic Spodosol. Ninety soil samples were randomly collected and evaluated. The attributes assessed were soil apparent electrical conductivity (ECa) measured by electromagnetic induction with horizontal dipole (ECa-H) in the soil layer 0.0-0.4 m and vertical dipole (ECa-V) in the soil layer 0.0-1.5 m; and ECse, θ and texture in the soil layers 0.0-0.2 m and 0.2-0.4 m. Spatial variability of the ECa was affected by the area relief and had no direct correlation with the electrical conductivity of the saturation extract (ECse). The attributes showed frequency distributions with overestimated means, distant from the mode and median. The area relief affected the spatial variability maps of ECa-V, ECa-H, ECse and θ; however, the correlation matrix did not show a well-defined cause-and-effect relationship. Spatial variability of the texture attributes (clay, silt and sand) was high, presenting a pure nugget effect.

INTRODUCTION

Precision agriculture requires determination and analysis of spatial and temporal variations of production factors, especially of the soil. These studies assist in determining specific management sites (SIQUEIRA; SILVA; DAFONTE, 2015; SIQUEIRA et al., 2016a), enabling variable rate input applications and determination of the appropriate time of application, thus increasing crop yield (SILVA et al., 2013).

Thematic maps are among the main tools used to assess factors affecting crop development. Maps are used in precision agriculture to manage spatial and temporal variability of crop factors, guiding specific agricultural practices to improve efficiency of input application, reducing production costs, impacts on the environment (MOLIN; RABELO, 2011; GUO; MAAS; BRONSON, 2012; ALVES et al., 2013), and soil compaction caused by machinery traffic.

Shaner, Farahani and Buchleiter (2008) also emphasized the importance of ECa in determining sites for specific soil management, due to its correlation with different soil physical and chemical attributes that affect crop yield.

The determination of ECa measured by electromagnetic induction is related to different soil properties because its readings are the result of the interactions between soil porous spaces, which are filled with air or water, interactions between soil particles, and structure state (SIQUEIRA; SILVA; DAFONTE, 2015; SIQUEIRA et al., 2016a). Thus, information on the correlations of ECa measured by electromagnetic induction with other soil properties in different types of soil and crops is important (SIQUEIRA; SILVA; DAFONTE, 2015; SIQUEIRA et al., 2016b).

Electromagnetic induction is an important alternative to evaluate ECa, since it is a noninvasive technique that evaluates ECa in the soil profile through multiple readings (ABDU; ROBINSON; JONES, 2007).
The objective of this study was to assess the spatial variability of the soil apparent electrical conductivity (ECa), electrical conductivity of the saturation extract (ECse), water content in the soil (θ%) and soil texture (clay, silt and sand) of a sugarcane crop area in the State of Pernambuco, Brazil.

MATERIAL AND METHODS

The experiment was carried out in an area of about 6.5 ha of the Santa Teresa sugar and alcohol industry, in Goiana, Zona da Mata Norte, State of Pernambuco, Brazil (07°34'25''S, 34°55'39''W and average altitude of 8.5 m) (Figure 1). Textural classification of the soil (Table 1) was determined using the methodology recommended by EMBRAPA (2011).

The climate of the region is tropical humid, type As', i.e., hot and humid according to the Köppen classification, with a rainy season from autumn to winter, annual average precipitation of 1,924 mm and annual average temperatures of 24 °C.

The study area has been used for rainfed sugarcane (Saccharum officinarum L.) crops, grown as a single crop, with straw burning before harvesting, since 1988. The crop area had been renewed in the 2010-2011 crop season; the soil was plowed, harrowed, grooved, limed, and fertilized, and the sugarcane variety RB867515 was planted.

Ninety sampling points were randomly chosen in the study area (Figure 2) and georeferenced with a GPS device with differential correction, for subsequent data collection of the soil texture (clay, silt and sand), electrical conductivity of the saturation extract (ECse) and water content. Samplings were carried out on January 21, 2014, with texture, ECse and water content determined in the soil layers of 0.0-0.2 and 0.2-0.4 m. Field evaluations of soil apparent electrical conductivity (ECa) (mS m⁻¹) were carried out using an electromagnetic induction device (EM38) (GEONICS, 1999), which measures the horizontal dipole (ECa-H), with readings within the soil layer 0.0-0.4 m, and the vertical dipole (ECa-V), with readings within the layer 0.0-1.5 m, following the procedures described by Siqueira, Silva and Dafonte (2015) and Siqueira et al. (2016b).

Field evaluations of the volumetric water content in the soil (θ%) in the soil layers 0.0-0.2 and 0.2-0.4 m were carried out using a transmission line oscillator (Hydrosense®, Campbell Scientific Australia Pty. Ltd.), which has a probe that emits an electromagnetic signal into the soil and evaluates how many times the signal returns in a certain period of time (SIQUEIRA et al., 2015).

Laboratory evaluations of the soil texture (clay, silt and sand) (g kg⁻¹) and ECse (dS m⁻¹) were carried out on samples of the soil layers 0.0-0.2 and 0.2-0.4 m. The samples were air dried, disaggregated, and sieved through a 2-mm mesh sieve. Soil texture (g kg⁻¹) was determined with a densimeter and ECse by the saturated paste extract method, following the procedures described by EMBRAPA (2011).
The data were subjected to descriptive statistical analysis (mean, median, standard deviation, coefficient of variation, skewness and kurtosis). The normality of the data was evaluated through the coefficients of skewness and kurtosis and histograms of frequency distribution. The coefficient of variation (CV, %) was classified as low (<12%), intermediate (12% to 62%) and high (>62%) (WARRICK; NIELSEN, 1980). The linear correlation between the attributes was determined with a significance level of 1% using the Shapiro-Wilk test, including the relief data of all sampling points to assess the effect of relief on the variables. Statistical analyses were performed using software R 3.3.1 (R CORE TEAM, 2016).

Spatial dependence analysis was performed by adjusting the experimental semivariogram, based on the assumption of stationarity of the intrinsic hypothesis (VIEIRA, 2000; SIQUEIRA et al., 2015). Spatial autocorrelation between neighboring sampling points was calculated by the semivariance γ(h), which is estimated by Equation (1),

$$\gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \left[ Z(x_i) - Z(x_i + h) \right]^2 \qquad (1)$$

in which N(h) is the number of experimental pairs of observations Z(x_i) and Z(x_i + h) separated by the distance h.

The software Surfer 11.0 was used to develop maps of spatial variability. Isoline maps were developed to compare the attributes when the pure nugget effect was detected, using Surfer's default parameters, based on a linear interpolation model by kriging.

RESULTS AND DISCUSSION

According to the mean and median analysis (Table 2), the data of all variables tended toward normality. However, the analysis of the frequency distribution graphs (Figures 3 and 4) showed different distributions (symmetrical and asymmetrical). The coefficients of skewness and kurtosis differed from 0 and 3; thus, the data did not show normal distributions.

Notes to Table 2: ECa-V = soil apparent electrical conductivity measured by electromagnetic induction with vertical dipole in the soil layer 0.0-1.5 m, ECa-H = soil apparent electrical conductivity measured by electromagnetic induction with horizontal dipole in the soil layer 0.0-0.4 m, ECse = electrical conductivity of the saturation extract, SD = standard deviation; CV = coefficient of variation (%).

The means of ECa-V and ECa-H were different. According to Siqueira, Silva and Dafonte (2015) and Siqueira et al. (2016a), the largest differences in soil apparent electrical conductivity measured by electromagnetic induction (ECa-V and ECa-H) are due to soil relief, water rate fluctuation, water content, texture and organic matter content. The water content in the soil and the soil texture varied at both depths, explaining the greatest differences between ECa-V and ECa-H. Moreover, 80% of the ECa-V readings were directly related to the ECa-H readings, as also found by Geonics (1999) and Siqueira, Silva and Dafonte (2015). Therefore, despite the different means of ECa-V and ECa-H, these variables are correlated.
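Equation (1) above is straightforward to implement. The sketch below computes the experimental semivariogram from scattered point data; it is a generic illustration (coordinates, values, and lag bins are hypothetical), not the authors' Surfer/R workflow.

```python
import numpy as np

def experimental_semivariogram(coords, values, lags, tol):
    """Estimate gamma(h) of Equation (1) for a set of lag distances.

    coords: (n, 2) array of sampling point coordinates (m)
    values: (n,) array of the attribute, e.g. ECa-V readings
    lags:   1-D array of lag distances h (m)
    tol:    half-width of the distance window around each lag (m)
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))          # pairwise distances
    sq = (values[:, None] - values[None, :]) ** 2     # squared differences
    gamma = np.full(len(lags), np.nan)
    for k, h in enumerate(lags):
        mask = np.triu(np.abs(dist - h) <= tol, k=1)  # N(h) pairs, no duplicates
        if mask.any():
            gamma[k] = 0.5 * sq[mask].mean()          # 1/(2N(h)) * sum of squares
    return gamma

# Hypothetical 90-point field, as in the sampling design of the study.
rng = np.random.default_rng(7)
coords = rng.uniform(0, 250, size=(90, 2))
values = rng.normal(10, 3, size=90)
print(experimental_semivariogram(coords, values, lags=np.arange(20, 201, 20), tol=10))
```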
ECa-V (mS m⁻¹), ECa-H (mS m⁻¹) and ECse (dS m⁻¹) means were different. Although they represent the same soil attribute, they were evaluated through different methods and expressed on different scales. Their major differences were due to the evaluation method, since electromagnetic induction is applied in the field, considering the soil electric current flow as a three-dimensional body encompassing a larger volume of soil (consisting of porous spaces, water and mineral particles), whereas ECse is determined in the laboratory, under controlled conditions, using disturbed soil samples, with readings that consider only the salts of the soil solution (SIQUEIRA et al., 2014; SIQUEIRA; SILVA; DAFONTE, 2015).

The coefficient of variation (CV%) of clay and sand was classified as low (<12%); water content in the soil (θ%) and ECse had intermediate CV (12% to 62%); and ECa-V, ECa-H and silt content had high CV (>62%).

According to the frequency distribution histograms (Figure 3), most attributes had lognormal distributions; however, geostatistical analysis can be carried out regardless of data normality (VIEIRA, 2000).

The frequency distribution graphs for ECa-V and ECa-H showed leptokurtic, positively skewed distributions, i.e., there were many low ECa-V and ECa-H values; thus, their mode and median were close and their means were overestimated. The ECse of the soil layer 0.0-0.2 m also had a leptokurtic, positively skewed distribution, whereas the ECse of the soil layer 0.2-0.4 m had a normal frequency distribution, with a slight trend toward a negatively skewed distribution. The histograms for ECa-V, ECa-H and ECse were probably affected by the relief, as reported by Siqueira, Silva and Dafonte (2015), who found the relief affecting the water flow in the soil and, consequently, the ECa-V, ECa-H and ECse.

The water content in the soil (θ%) had a lognormal frequency distribution, also with overestimation of the mean and a leptokurtic, positively skewed distribution. This result was expected, since the water flow and distribution in the soil favor the formation of sites with high and low water content as a function of relief, as reported by Siqueira et al. (2015). The frequency distribution histograms for water content in the soil showed very elongated tails, confirming that the water content varied, with areas of high and low water content along the landscape of the study area.

Among the texture attributes, only silt had a lognormal frequency distribution in both soil layers. Data of clay and sand had normal distributions, with more homogeneous histograms and less elongated tails, resulting in more stable means.

According to the geostatistical analysis (Table 3), most of the texture attributes had a pure nugget effect (PNE), denoting small-scale spatial variability, i.e., at distances smaller than those of the random sampling. Only for the clay content of the soil layer 0.2-0.4 m could a model be fitted to the experimental semivariogram.

The spherical model was fitted to the semivariograms of ECa-V, ECa-H and θ (0.0-0.2 m), and the Gaussian model to ECse in both layers. The spherical model fitted the semivariograms of most of the attributes, confirming reports of other authors, who describe this model as the one that best fits soil attributes (CAMBARDELLA et al., 1994; VIEIRA, 2000; SIQUEIRA; SILVA; DAFONTE, 2015; SIQUEIRA et al., 2016a).
The highest range (a) was found for the ECse in the soil layer 0.2-0.4 m (199 m) and the lowest for the ECa-H (57 m).

According to the classification of Cambardella et al. (1994), the attributes evaluated had strong (<25%) and moderate (25 to 75%) spatial dependence indexes. Siqueira et al. (2015) evaluated the spatial variability of soil attributes at different scales and found high SDI (%) for water content in the soil at different soil depths (0.0-0.2, 0.2-0.4 and 0.4-0.6 m). Differences in the spatial dependence index were due to the natural variation of the soil and the relief of the study area.

Parameters of the models fitted to the experimental semivariograms of ECa-V and ECa-H showed a similar spatial pattern, fitting a spherical model. The ECse spatial pattern was different, especially by fitting a Gaussian mathematical model. This result was due to the scalar magnitude and because readings were performed on undisturbed (ECa-V and ECa-H) and disturbed (ECse) soil samples.

According to the linear correlation matrix (Table 4), the relief was significantly correlated at 1% probability (Shapiro-Wilk test) only with ECa-V (|r| = 0.815) and ECa-H (r = 0.826).

According to the spatial variability maps (Figures 5 and 6), the ECa-V (Figure 5A) and ECa-H (Figure 5B) had similar distributions of the contour lines, explaining their high correlation (|r| = 0.940). Moreover, the device used reads the same volume of soil, and vertical dipole readings (ECa-V) are affected by the soil surface layer, which was evaluated by the horizontal dipole (ECa-H) (CORWIN; LESCH, 2003, 2005; SIQUEIRA; SILVA; DAFONTE, 2015).

The spatial variability maps of ECa-V, ECa-H, ECse and θ showed no similar patterns (Figure 5), confirming their low spatial correlation (Table 4). However, these maps followed the same trend pattern, as shown in the relief map (Figure 1). Therefore, the spatial distribution of the attributes (ECa-V, ECa-H, ECse and θ) is affected by relief. According to Siqueira, Silva and Dafonte (2015) and Siqueira et al. (2015), soil declivity is the factor that most affects water distribution and, consequently, the distribution and interaction of other soil attributes.

The spatial variability maps of texture (clay, silt and sand) in the soil layers 0.0-0.2 and 0.2-0.4 m (Figure 6) showed no spatial relationship with the maps of ECa-V, ECa-H, ECse and θ (Figure 5), as confirmed by the low values of linear correlation (Table 4).

The spatial distribution maps of soil texture (clay, silt and sand) showed great differences in contour lines, denoting high spatial variability. All texture attributes had a pure nugget effect (PNE), except the clay at 0.2-0.4 m (Table 3). These maps were developed by linear interpolation to compare spatial patterns, even with PNE, since cartography is a classical science, and data with PNE processed by geostatistics are usually not properly analyzed. Thus, the PNE of the texture attributes was due to the high variability of the data along the landscape, affected by different soil formation factors (SIQUEIRA; SILVA; DAFONTE, 2015; SIQUEIRA et al., 2015).

CONCLUSIONS

Spatial variability of the soil apparent electrical conductivity measured by electromagnetic induction (ECa-V and ECa-H) was affected by relief and had no direct correlation with the electrical conductivity of the soil saturation extract (ECse).
The soil attributes evaluated had frequency distributions with overestimated means, distant from the mode and median.

The area relief affected the spatial variability of ECa-V, ECa-H, ECse and θ; however, the correlation matrix did not show a well-defined cause-and-effect relationship.

Spatial variability of the soil texture attributes (clay, silt and sand) was high, presenting a pure nugget effect.

Figure 1. Topographic map of the study area.

Figure 2. Location of the sampling points in the study area.

Figure 3. Histograms of frequency distribution of the soil attributes evaluated.

Figure 4. Histograms of frequency distribution of the soil attributes evaluated.

Table 2. Descriptive statistics of attributes of an orthic Humiluvic Spodosol of sandy texture.
2019-04-02T13:14:00.449Z
2018-04-06T00:00:00.000
{ "year": 2018, "sha1": "0f8c39f5b3d4642220fb985897b28e3a9289c854", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1590/1983-21252018v31n220rc", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e5941eb149f4b043c30bca3c46ba208fbc2958ab", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
119679520
pes2o/s2orc
v3-fos-license
Classical and Quantum Systems: Alternative Hamiltonian Descriptions

In complete analogy with the classical situation (which is briefly reviewed) it is possible to define bi-Hamiltonian descriptions for quantum systems. We also analyze compatible Hermitian structures, in full analogy with compatible Poisson structures.

Introduction

In the past thirty years a large number of nonlinear evolution equations were discovered to be integrable systems [1]. It is a fact that in almost all cases integrable systems also exhibit more than one Hamiltonian description, i.e. they admit alternative Hamiltonian descriptions (they are often called bi-Hamiltonian systems) [2]. In connection with quantum mechanics, there have been proposals for studying complete integrability in the quantum setting [3], [4].

If we take the viewpoint of Dirac [5], "Classical mechanics must be a limiting case of quantum mechanics. We should thus expect to find that important concepts in classical mechanics correspond to important concepts in quantum mechanics and, from an understanding of the general nature of the analogy between classical and quantum mechanics, we may hope to get laws and theorems in quantum mechanics appearing as simple generalizations of well known results in classical mechanics", it seems quite natural to ask the question: which alternative structures in quantum mechanics, in the appropriate limit, will provide us with the alternative structures available in classical mechanics? In particular, is it possible to exhibit the analog of alternative Hamiltonian descriptions in the quantum framework?

As we are interested in the structures rather than in specific applications, it is better to consider the simplest setting in order to avoid technicalities. To clearly identify the directions we should take in the quantum setting, it is appropriate to briefly review the search for alternative Hamiltonian descriptions in the classical setting, leaving aside the problem of existence of compatible alternative Poisson brackets, which would give rise to complete integrability of the considered systems.

The paper is organized in the following way. In Section 2 we deal with alternative Hamiltonian descriptions for classical systems, while in Section 3 the particular case of Newtonian equations of motion is addressed and in Section 4 a meaningful example is discussed in detail. The analogous picture in the quantum case is presented in Section 5, using the Weyl approach to the classical-quantum transition. In Section 6, the Schroedinger picture is the framework used to study alternative descriptions of the equations of motion for quantum systems in the finite-dimensional case. The algebraic results obtained there in the search for invariant Hermitian structures are extended to infinite dimensions in the last part of the paper. In particular, in Section 7 some theorems of Nagy are recalled to provide an invariant Hermitian structure, and in Section 8, starting with two Hermitian structures, the group of bi-unitary transformations is characterized and a simple example is used to show how the theory works. Finally, some concluding remarks are drawn in Section 9.
With any Poisson bracket we may associate a Poisson tensor Λ defined by

$$\Lambda(df, dg) := \{f, g\}.$$

To search for alternative Hamiltonian descriptions for a given dynamical system associated with a vector field Γ on a manifold M, with associated equations of motion

$$\dot{x} = \Gamma(x),$$

we have to solve the following equation for the Poisson tensor Λ:

$$L_\Gamma \Lambda = 0. \qquad (4)$$

The vector field Γ will be completely integrable if we can find two Poisson tensors Λ₁ and Λ₂, out of the possible alternative solutions of equation (4), such that any linear combination λ₁Λ₁ + λ₂Λ₂ satisfies the Jacobi identity. In this case the Poisson structures are said to be compatible [11]. In particular, constant Poisson tensors Λ₁ and Λ₂ are compatible.

Summarizing, given a vector field Γ we search for pairs (Λ, H) which allow us to decompose Γ in the following product

$$\Gamma = \Lambda(dH),$$

along with the additional condition (the Jacobi identity), which can be written as the vanishing of the Schouten bracket of Λ with itself:

$$[\Lambda, \Lambda] = 0.$$

When the starting equations of motion are second order, further considerations arise.

Alternative Hamiltonian descriptions for equations of Newtonian type

We recall that, according to Dyson [12], [13], Feynman addressed a similar problem, with the additional condition of localizability; i.e., written in terms of positions and momenta (x_j, p_j), the localizability condition reads

$$\{x_j, x_k\} = 0.$$

Thus, the search for Hamiltonian descriptions for a second order differential equation reads

$$\dot{x}_j = \{x_j, H\}, \qquad \ddot{x}_j = \{\dot{x}_j, H\}.$$

Now we have to solve for the pair ({·,·}, H): it is clear that the problem is highly non-trivial. However, if we require localizability and make the additional requirement of Galileian boost invariance, we gain an incredible simplification. Indeed, starting with

$$\dot{x}_j = \{x_j, H\} = \{x_j, \dot{x}_k\}\,\frac{\partial H}{\partial \dot{x}_k}$$

and taking the derivative with respect to $\dot{x}_k$, we find

$$\delta_{jk} = \{x_j, \dot{x}_l\}\,\frac{\partial^2 H}{\partial \dot{x}_l\, \partial \dot{x}_k}.$$

We have obtained that the bracket is not degenerate and the Hessian of H is also not degenerate. We may now use a Legendre-type transformation to go from the Hamiltonian description in terms of H to the Lagrangian description in terms of £; the corresponding problem in terms of Lagrangian functions is then linearized, and we have to solve for £ the equation

$$\Gamma\!\left(\frac{\partial \pounds}{\partial \dot{x}_j}\right) - \frac{\partial \pounds}{\partial x_j} = 0,$$

where Γ is the given second-order vector field. Formulated in these terms the problem goes back to Helmholtz [14].

A Paradigmatic Example

We shall consider a simple example that will be useful also to discuss the corresponding quantum situation [15]. On M = R^{2n}, we consider

$$\Gamma = \sum_k \lambda_k \left( p_k \frac{\partial}{\partial x_k} - x_k \frac{\partial}{\partial p_k} \right).$$

This Γ represents the dynamical vector field of the anisotropic harmonic oscillator with frequencies λ_k. As $\frac{\partial}{\partial p_k} \wedge \frac{\partial}{\partial x_k}$ is invariant under the flow associated with Γ, it follows that for any constant of the motion F(x, p) the following two-form is invariant:

$$\omega_F = F(x, p)\, \sum_k dp_k \wedge dx_k, \qquad L_\Gamma F = 0.$$

For the one-dimensional harmonic oscillator,

$$\omega_f = f(p^2 + q^2)\, dp \wedge dq$$

provides the most general invariant two-form, parameterized by the function f(p² + q²). For instance, a particular choice of f defines new variables (P, Q) in which the evolution is again linear; exhibiting the Poisson bracket of the new variables in terms of the old ones then shows that the transformation is not canonical.

Now we have to stress that the equations of motion are linear in the new variables, in addition to the linearity in the old variables. We have obtained that the equations are linear in two different coordinate systems with a connecting coordinate transformation which is not linear. We notice that in each coordinate system, say (p, q) and (P, Q), the following tensor fields are preserved by the dynamical evolution:

$$\omega = dp \wedge dq, \qquad g = dp \otimes dp + dq \otimes dq,$$

and

$$\Omega = dP \wedge dQ, \qquad G = dP \otimes dP + dQ \otimes dQ.$$

In each set of coordinates we have alternative realizations of both the linear inhomogeneous symplectic group, preserving the corresponding symplectic structure, and of the linear rotation group.
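For the one-dimensional oscillator, the construction of alternative pairs (Λ, H) sketched above can be made fully explicit. The following block is my reconstruction of a standard instance; the specific choice of f in the original example may differ.

```latex
Let $H=\tfrac12(p^2+q^2)$ and $\Gamma = p\,\partial_q - q\,\partial_p$, so that
$\Gamma=\Lambda(dH)$ with the canonical tensor $\Lambda=\partial_q\wedge\partial_p$.
For any positive function $f$,
\[
\Lambda_f \;=\; f(H)\,\partial_q\wedge\partial_p
\]
is again a Poisson tensor (in two dimensions the Jacobi identity is automatic)
and is invariant, $L_\Gamma \Lambda_f = 0$, because $L_\Gamma f(H)=0$ and
$L_\Gamma(\partial_q\wedge\partial_p)=0$.  Taking $H_f=g(H)$ with $g'=1/f$,
\[
\Lambda_f(dH_f) \;=\; f(H)\,g'(H)\,\Lambda(dH) \;=\; \Gamma ,
\]
so $(\Lambda_f,H_f)$ is an alternative Hamiltonian description of the same
dynamics.  For example, $f(s)=2s$ gives the non-canonical bracket
$\{q,p\}_f = p^2+q^2$ together with $H_f=\tfrac12\ln H$.
```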
Their intersection yields alternative realizations of the unitary group. All these linear realizations are not linearly related.

How can we formulate an analogous picture for the quantum case? At the classical level, the dynamical vector field Γ is a derivation for the associative algebra F(M) and a derivation for the binary product associated with the Poisson bracket. Would it be possible to define alternative "Lie brackets" and consider a similar approach also in the quantum setting? Unfortunately this naive approach does not work: when the algebra is associative and maximally non-commutative, a Lie bracket compatible with the associative product is necessarily proportional to the commutator, i.e. λ(AB − BA). To get an idea of how to search for alternative descriptions for quantum systems, it is convenient to consider the Weyl approach to quantization, because in this approach the symplectic structure plays a well identified role.

Quantum systems in the Weyl Approach

Given a symplectic vector space (E, ω), a Weyl system [19], [20], [4] is defined to be a strongly continuous map from E to unitary transformations on some Hilbert space H,

$$W : E \to \mathcal{U}(\mathcal{H}),$$

with

$$W(e_1)\, W(e_2) = e^{\frac{i}{2}\,\omega(e_1, e_2)}\, W(e_1 + e_2) \qquad (27)$$

and W(0) = I, with I the identity operator. Thus a Weyl system defines a projective unitary representation of the Abelian vector group E whose cocycle is determined by the symplectic structure. The existence of Weyl systems for a finite-dimensional symplectic vector space is exhibited easily, and it amounts to the celebrated von Neumann theorem on the uniqueness of the canonical commutation relations [21], [22].

Consider a Lagrangian subspace L and an associated isomorphism E ≅ L* ⊕ L ≅ T*L. On L we consider square-integrable functions with respect to a Lebesgue measure on L, a measure invariant under translations. The splitting of E allows us to define e = (α, x) and set

$$(W(e)\Psi)(y) = e^{\,i\,\alpha(y) + \frac{i}{2}\,\alpha(x)}\, \Psi(y + x),$$

x, y ∈ L, α ∈ L*, Ψ ∈ L²(L, dⁿy); it is obvious that the W(e) are unitary operators and moreover they satisfy condition (27) with ω being the canonical one on T*L. The strong continuity allows us to use Stone's theorem to get infinitesimal generators R(e) such that

$$W(te) = e^{\,i t R(e)}, \qquad t \in \mathbb{R},$$

and R(λe) = λR(e) for any λ ∈ R. When we select a complex structure J on E we may define "creation" and "annihilation" operators by setting

$$a(e) = \frac{1}{\sqrt{2}}\big(R(e) + i R(Je)\big), \qquad a^{\dagger}(e) = \frac{1}{\sqrt{2}}\big(R(e) - i R(Je)\big).$$

By using this complex structure on E we may construct an inner product on E as

$$\langle e_1, e_2 \rangle = \omega(J e_1, e_2) + i\,\omega(e_1, e_2);$$

therefore creation and annihilation operators are associated with a Kähler structure on E [23].

The introduction of "creation" and "annihilation" operators is particularly convenient to relate alternative descriptions on the Hilbert space (Fock space) with alternative descriptions on the space of observables. The Weyl map allows us to associate automorphisms of the space of operators with elements S of the symplectic linear group acting on (E, ω), by setting

$$U_S\, W(e)\, U_S^{-1} = W(Se).$$

At the level of the infinitesimal generators of the unitary group, we have

$$U_S\, R(e)\, U_S^{-1} = R(Se).$$

Remark: As the relation defining U_S is quadratic, one is really dealing with the metaplectic group rather than the symplectic one [24]. However, we shall not insist on this difference.

The Weyl map can be extended to functions on T*L ⇋ E; indeed, we first define the symplectic Fourier transform [24] (with a suitable normalization)

$$\tilde{f}(e) = \int_E f(e')\, e^{\,i\,\omega(e, e')}\, d^{2n}e'$$

and then associate with it the operator A_f defined by

$$A_f = \int_E \tilde{f}(e)\, W(e)\, d^{2n}e.$$

Vice versa, with any operator A acting on H we associate a function f_A on the symplectic space E by setting

$$f_A(e) = \mathrm{Tr}\big(A\, W^{\dagger}(e)\big);$$

this map is called the Wigner map. When A represents a pure state, i.e. a rank-one projection operator, the corresponding function is the Wigner function.
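The Weyl relation (27) can be probed numerically in a finite-dimensional analogue, in which the vector group E is replaced by Z_n × Z_n. The sketch below is not from the paper: it verifies the commutation form of (27) for discrete shift and clock operators; the dimension and test vectors are arbitrary choices.

```python
import numpy as np

n = 7                                     # dimension of the finite "phase space" Z_n x Z_n
omega = np.exp(2j * np.pi / n)            # primitive n-th root of unity

# Shift and clock: (S psi)(j) = psi(j+1 mod n), C = diag(omega^j).  They obey
# S C = omega C S, the finite analogue of the canonical commutation relations.
S = np.roll(np.eye(n), -1, axis=0)
C = np.diag(omega ** np.arange(n))

def W(a, b):
    """Discrete Weyl-type operator S^a C^b for an integer vector e = (a, b)."""
    return np.linalg.matrix_power(S, a) @ np.linalg.matrix_power(C, b)

e1, e2 = (2, 5), (3, 1)
W1, W2 = W(*e1), W(*e2)

# Commutation form of the Weyl relation: W1 W2 = omega^{sigma(e1, e2)} W2 W1,
# with the symplectic form sigma(e1, e2) = a1*b2 - a2*b1.
sigma = e1[0] * e2[1] - e2[0] * e1[1]
print(np.allclose(W1 @ W2, omega ** sigma * (W2 @ W1)))   # True
```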
A new product of functions may be introduced on F(E) by setting

$$f \star g := f_{A_f A_g},$$

i.e. the function associated, via the Wigner map, with the product of the operators A_f and A_g. We thus find that alternative symplectic structures on E give rise to alternative associative products on F(E), all of them non-commutative. The dynamics on F(E) can be written in terms of this non-commutative product as

$$i\hbar\,\frac{df}{dt} = H \star f - f \star H.$$

In this approach it is very simple to formulate the "suitable limit" to go from the quantum description to the classical description, by noticing that the limit (when it exists)

$$\lim_{\hbar \to 0}\ \frac{1}{i\hbar}\,(f \star g - g \star f) =: \{f, g\}$$

defines a Poisson bracket on F(E). A different expression for this product, involving the Poisson tensor Λ, is given by the Moyal formula

$$f \star g = f\, \exp\!\left( \frac{i\hbar}{2}\, \frac{\overleftarrow{\partial}}{\partial \xi^a}\, \Lambda^{ab}\, \frac{\overrightarrow{\partial}}{\partial \xi^b} \right) g,$$

where as usual ←∂/∂ξ and →∂/∂ξ act to the left and to the right, respectively [25], [26], [27], [28]. Now it is clear that, by using for instance the alternative Poisson brackets we derived for the one-dimensional harmonic oscillator in Section 4, we may write the corresponding star products ⋆₁ and ⋆₂ associated with Λ₁ and Λ₂. In this way we get two alternative associative products on F(E), both admitting Γ, the dynamical vector field of the harmonic oscillator, as a derivation.

In the same sense, for the Schroedinger picture, on the Hilbert space of square-integrable functions on the line, we may use either the Lebesgue measure dq, invariant under the translations generated by ∂/∂q, or the measure dQ, invariant under the translations generated by ∂/∂Q.

Summarizing, by using the Weyl approach we have been able to show that, to search for alternative Hamiltonian descriptions for quantum systems, we may look for alternative inner products on the space of states or alternative associative products on the space of observables. In the coming sections we shall investigate the existence of alternative descriptions in the Schroedinger picture. Preliminary results for alternative descriptions in the Heisenberg picture are available in Ref. [3].

Equations of motion for Quantum Systems and alternative descriptions

Equations of motion in the carrier space of states are defined by the Schroedinger equation (we set ℏ = 1):

$$i\,\frac{d\psi}{dt} = H\psi.$$

Here we shall first restrict ourselves to a finite n-dimensional complex vector space H. The dynamics is determined by the linear operator H. To search for alternative descriptions, we look for all scalar products on H invariant under the dynamical evolution. If we define Γ : H → TH to be the map ψ → (ψ, −iHψ), we have to solve L_Γ h = 0, h representing an unknown Hermitian structure on H.

We notice that any h on H defines a Euclidean metric g, a symplectic form ω and a complex structure J on the realification H_R of the complex space H: h(·,·) =: g(·,·) + i g(J·,·). The imaginary part of h is a symplectic structure ω on the real vector space H_R: ω(·,·) := g(J·,·). Thus any two of the previous structures will determine the third one, so defining an admissible triple (g, J, ω). It is clear that L_Γ h = 0 is equivalent to L_Γ ω = 0, L_Γ g = 0, L_Γ J = 0, so that we may solve L_Γ h = 0 by starting from L_Γ ω = 0. To solve this last equation we introduce the bi-vector field Λ associated with the Poisson brackets defined by ω in the standard way [29], [30]. The vector field Γ will be factorized in the form

$$\Gamma = \Lambda^{lk}\, \frac{\partial f_H}{\partial \xi^k}\, \frac{\partial}{\partial \xi^l}.$$

The matrix Λ^{lk} satisfies the conditions of skew-symmetry, Λ^{lk} = −Λ^{kl}, and, being constant, it satisfies the Jacobi identity automatically. As Γ is linear and Λ^{lk} is constant, f_H must be quadratic: f_H = ½ ξ^k H_{km} ξ^m, and therefore, if we write Γ = A^l_k ξ^k ∂/∂ξ^l, we have the necessary and sufficient condition for Γ to be Hamiltonian in the form

$$A^{l}{}_{m} = \Lambda^{lk} H_{km}, \qquad (54)$$

and we get the following:

Proposition 1. All alternative Hamiltonian descriptions for Γ are provided by all possible factorizations of A into the product of a skew-symmetric matrix Λ times a symmetric matrix H.
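Proposition 1, together with the polar-decomposition construction used in the next step, is easy to illustrate numerically. The sketch below is mine, not the paper's: for a two-dimensional oscillator generator A it exhibits a factorization A = ΛH into a skew-symmetric Λ times a symmetric positive-definite H, and extracts an invariant complex structure J as the orthogonal factor of the polar decomposition of A.

```python
import numpy as np
from scipy.linalg import polar, expm

# Generator of the evolution on R^4 (coordinates xi = (q1, q2, p1, p2)):
# a two-dimensional harmonic oscillator with distinct "frequencies".
Lam = np.block([[np.zeros((2, 2)), np.eye(2)],
                [-np.eye(2), np.zeros((2, 2))]])   # skew-symmetric Poisson tensor
H = np.diag([1.0, 2.0, 1.0, 2.0])                  # symmetric, positive definite
A = Lam @ H                                        # A = Lambda H  (Proposition 1)

assert np.allclose(Lam, -Lam.T) and np.allclose(H, H.T)

# Polar decomposition A = J |A|: scipy returns (orthogonal, positive) factors.
J, absA = polar(A)
print(np.allclose(J @ J, -np.eye(4)))      # J^2 = -1: a complex structure
print(np.allclose(J @ A, A @ J))           # J commutes with A (so with the flow)

# H itself is invariant under the flow exp(tA): exp(tA)^T H exp(tA) = H.
t = 0.37
Ft = expm(t * A)
print(np.allclose(Ft.T @ H @ Ft, H))       # True
```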
Moreover it is easy to show that the following equivalences hold:

$$L_\Gamma\, \omega = 0 \iff A^T \omega + \omega A = 0, \qquad L_\Gamma\, g = 0 \iff A^T g + g A = 0,$$

where ω stands for the matrix representing the symplectic structure. In Ref. [32] it is shown that a necessary condition for the existence of such a factorization (eq. (54)) for A is that Tr A^{2k+1} = 0 for all k ∈ N.

Assuming that we have found a factorization for A, say A^l_m = Λ^{ls} H_{sm}, we may investigate the existence of an invariant Hermitian structure h on H. In the case det A ≠ 0, if H_{sm} is positive definite, we may use it as a metric tensor g to define a scalar (Euclidean) product on H_R. Then we can write the polar decomposition of the operator A: A = J|A|, where, as usual, |A| is defined as √(A†A). Since Ker A = ∅, J is uniquely defined and is g-orthogonal: J†J = JJ† = 1. J has the following properties: i) J commutes with A and |A|; this follows from the fact that J = A|A|⁻¹ and |A| commutes with A; ii) J² = −1, so that J is a complex structure. To deal with the degenerate case, det A = 0, additional work is needed and can be found in Ref. [31].

Having obtained an invariant complex structure J, it is now possible to define an invariant Hermitian structure by using the invariant positive-definite symmetric matrix H_{sm} and the complex structure J. All in all we have proven the following:

Proposition 2. Any vector field Γ which admits a Hamiltonian factorization into ΛH preserves a Hermitian structure whenever the Hamiltonian function f_H is positive definite.

As a consequence, on finite-dimensional complex vector spaces, quantum evolutions are provided by Hamiltonian vector fields associated with quadratic Hamiltonian functions which are positive definite. Because each Hamiltonian function gives rise to a Euclidean product, it is clear that Γ is at the same time the generator of both a symplectic and an orthogonal transformation, therefore the generator of a unitary transformation. Besides, the way J has been constructed out of A may be used to show [31] that the (1−1) tensor field associated with the matrix J satisfies the property J(Γ) = −Δ, where Δ is the Liouville vector field Δ = ξ^k ∂/∂ξ^k. By using the dilation Δ it is possible to write the quadratic Hamiltonian function in the coordinate-free form

$$f_H = \tfrac{1}{2}\, \omega(\Gamma, \Delta).$$

At this point, in complete analogy with compatible Poisson structures [33], [34], [35], we may introduce and analyze a notion of "compatible Hermitian structures", or more precisely compatible triples (g_a, J_a, ω_a), a = 1, 2. We consider two admissible triples (g_a, J_a, ω_a), a = 1, 2, on H_R and the corresponding Hermitian structures h_a = g_a + iω_a. We stress that h_a is a Hermitian form on H_a, which is the complexification of H_R via J_a, so that in general h₁ and h₂ are not Hermitian structures on the same complex vector space. Moreover, we consider the associated quadratic functions

$$f_a = \tfrac{1}{2}\, h_a(\psi, \psi), \qquad a = 1, 2,$$

to which correspond vector fields Γ₁ and Γ₂ via ω₁ and ω₂, respectively.

Definition. Two Hermitian structures h₁ and h₂ are said to be compatible if each one is invariant under the Hamiltonian vector field associated with the other:

$$L_{\Gamma_1} h_2 = 0, \qquad L_{\Gamma_2} h_1 = 0.$$

Equivalently, L_{Γ₁}ω₂ = 0 and L_{Γ₁}g₂ = 0, together with the conditions obtained by exchanging the indices 1 and 2. We find immediately that [Γ₂, Γ₁] = 0. Moreover, remembering that a given symplectic structure ω defines the Poisson bracket {f, g} = ω(X_g, X_f), with i_{X_f}ω = df, we derive also

$$\{f_1, f_2\}_1 = 0 = \{f_1, f_2\}_2,$$

where {·,·}_{1,2} is associated with ω_{1,2}.

Out of the two compatible Hermitian structures on the real vector space H_R we have the following (1−1) tensor fields: G = g₁⁻¹ ∘ g₂, T = ω₁⁻¹ ∘ ω₂, and J₁, J₂. These four (1−1) tensor fields generate an Abelian algebra and are invariant under Γ₁ and Γ₂.
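The algebra generated by G, T, J₁ and J₂ can be probed on a small example. The sketch below is my construction, not the paper's, with eigenvalues chosen arbitrarily: two admissible triples on R⁴ (with the convention ω = −gJ), built by hand so that J₂ = J₁ on one block and J₂ = −J₁ on the other. The four tensors commute pairwise, and the eigenvalues of T come out as ±λ_k, anticipating the decomposition derived next.

```python
import numpy as np

j = np.array([[0.0, -1.0], [1.0, 0.0]])     # 2x2 complex structure, j^2 = -1
Z = np.zeros((2, 2))

# Two admissible triples (g, J, omega) on R^4, with omega = -g J:
J1 = np.block([[j, Z], [Z, j]])
g1 = np.eye(4)
w1 = -g1 @ J1

J2 = np.block([[j, Z], [Z, -j]])            # J2 = +J1 on block 1, -J1 on block 2
g2 = np.diag([2.0, 2.0, 3.0, 3.0])
w2 = -g2 @ J2

G = np.linalg.inv(g1) @ g2                  # G = g1^{-1} g2
T = np.linalg.inv(w1) @ w2                  # T = w1^{-1} w2

# The four (1-1) tensors commute pairwise:
ops = [G, T, J1, J2]
print(all(np.allclose(X @ Y, Y @ X) for X in ops for Y in ops))   # True

# Eigenvalues of G are lambda_k > 0; eigenvalues of T are +/- lambda_k:
print(np.linalg.eigvals(G).real)            # [2, 2, 3, 3]
print(np.linalg.eigvals(T).real)            # [2, 2, -3, -3]
```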
It is also possible to prove that

$$T = J_1^{-1} \circ G \circ J_2,$$

so that, by commutativity, T = G J₁⁻¹ J₂. The following properties are easy to derive:

$$g_1(Gx, y) = g_1(x, Gy) = g_2(x, y), \qquad g_2(Gx, y) = g_2(x, Gy) = g_1^{-1}\big(g_2(x, \cdot), g_2(y, \cdot)\big). \qquad (62)$$

Thus we have found [31] that:

Proposition 3. The (1−1) tensor fields G, T, J₁ and J₂ are a set of mutually commuting linear operators. G and T are self-adjoint, while J₁ and J₂ are skew-adjoint with respect to both metric tensors; moreover, J₁ and J₂ are orthogonal transformations for both metric tensors.

Now we can consider the implications on the 2n-dimensional vector space H_R coming from the existence of two compatible Hermitian structures. The space H_R will split into a direct sum of eigenspaces, H_R = ⊕_k H_R^{λ_k}, where the λ_k are the distinct eigenvalues of G. According to our previous statements, the sum will be an orthogonal sum with respect to both metrics, and on each H_R^{λ_k} we have G = λ_k I_k, with I_k the identity matrix on H_R^{λ_k}. By the compatibility condition, T will introduce a further orthogonal decomposition of each H_R^{λ_k}, of the form

$$\mathcal{H}_R^{\lambda_k} = \bigoplus_r W_{\lambda_k, \mu_{k,r}},$$

where the μ_{k,r} are the distinct eigenvalues of T on H_R^{λ_k}. The complex structures commute in turn with both G and T; therefore they will leave each W_{λ_k, μ_{k,r}} invariant. Now we can reconstruct, using g_a and J_a, the two symplectic structures. They will be block-diagonal in the decomposition of H_R, and on each subspace W_{λ_k, μ_{k,r}} they will satisfy

$$g_2 = \lambda_k\, g_1, \qquad \omega_2 = \mu_{k,r}\, \omega_1. \qquad (66)$$

Therefore, on the same subspaces, J₂ = (μ_{k,r}/λ_k) J₁, and since J₁² = J₂² = −1 we get (μ_{k,r}/λ_k)² = 1; hence μ_{k,r} = ±λ_k and λ_k > 0. The index r can then assume only two values, corresponding to ±λ_k.

All in all, we have proved the following:

Proposition 4. If two Hermitian structures h₁ = g₁ + iω₁ and h₂ = g₂ + iω₂ are compatible, then the vector space H_R will decompose into the double orthogonal sum

$$\mathcal{H}_R = \bigoplus_{k=1,\dots,r}\ \bigoplus_{\alpha = \pm 1} W_{\lambda_k, \alpha \lambda_k}, \qquad (67)$$

where the index k = 1, …, r ≤ 2n labels the eigenspaces of the (1−1) tensor G = g₁⁻¹ ∘ g₂ corresponding to its distinct eigenvalues λ_k > 0, while T = ω₁⁻¹ ∘ ω₂ will be diagonal with eigenvalues ±λ_k on W_{λ_k, ±λ_k}, on each of which J₂ = ±J₁.

As neither symplectic form is degenerate, the dimension of each W_{λ_k, ±λ_k} will necessarily be even. At this point, from two admissible triples (g_a, J_a, ω_a), a = 1, 2, on H_R, we can consider the corresponding Hermitian structures h_a = g_a + iω_a. We stress that h_a is a Hermitian form on H_a, which is the complexification of H_R via J_a, so that in general h₁ and h₂ are not Hermitian structures on the same complex vector space. When the triples are compatible, the decomposition of the space in eq. (67) holds, so that H_R can be decomposed into the direct sum of the spaces H_R⁺ and H_R⁻, on which J₁ = ±J₂, respectively. The comparison of h₁ and h₂ requires a fixed complexification of H_R, for instance H₁ = H₁⁺ ⊕ H₁⁻. Then, using orthonormal bases {e₊ᵏ} and {e₋ᵏ}, one can write h₁ and h₂ in block form: on H₁⁺ the form h₂ is sesquilinear of the same type as h₁, while on H₁⁻ the roles of linearity and antilinearity in its two arguments are exchanged. It is apparent that h₂ is not a Hermitian structure, as it is neither linear nor antilinear on the whole space H₁.

Now it is possible to consider the case of a vector field Γ which leaves both compatible triples invariant. As a result, the direct sum decomposition of the space in eq. (67) is invariant under the action of Γ. Moreover, the field Γ is the generator of both bi-orthogonal and bi-symplectic transformations on H_R, therefore the generator of unitary transformations on H_a, a = 1, 2.

Searching for invariant Hermitian structures

In this section we would like to investigate the equation L_Γ h = 0 when the carrier space is some infinite-dimensional Hilbert space.
As is well known, in many physical instances the dynamical vector field Γ entering the Schroedinger equation is associated with unbounded operators. It follows that the search for solutions of L_Γ h = 0 is plagued with difficult domain problems. It is convenient, therefore, to search for Hermitian structures solutions of the equation

$$h(\Phi(t)x, \Phi(t)y) = h(x, y) \quad \text{for all } t, \qquad (71)$$

i.e. for Hermitian structures invariant under the one-parameter group Φ(t) of linear transformations describing the dynamical evolution. We may consider, in more general terms, the following problem: given an invertible transformation T : H → H, under which conditions does there exist an invariant Hermitian structure h such that

$$h(x, y) = h(Tx, Ty)? \qquad (72)$$

As is well known, in infinite dimensions the topology of the vector space of states is an additional ingredient which has to be given explicitly. We therefore assume that H is a Hilbert space with some fiducial Hermitian structure h₀, in general not invariant under the action of T. We require T to be continuous, along with its inverse, in the topology defined by h₀, or by any other Hermitian structure topologically equivalent to h₀, which allows us to consider bounded sets. In the search for invariant Hermitian structures on H topologically equivalent to h₀, we have this preliminary result:

Proposition. A Hermitian structure h₁ defines the same topology as h₀ if and only if there exists a bounded, positive, invertible operator Q, selfadjoint with respect to both structures, such that h₁(x, y) = h₀(Qx, Qy).

Proof. In order that h₁ and h₀ define the same topology on H, it is necessary that there exist two real positive constants A, B such that

$$A\, h_0(x, x) \le h_1(x, x) \le B\, h_0(x, x) \quad \text{for all } x \in \mathcal{H}.$$

The use of the Riesz theorem on bounded linear functionals immediately implies that there exists a bounded, positive operator G, selfadjoint with respect to both Hermitian structures, defined implicitly by the equation

$$h_1(x, y) = h_0(Gx, y).$$

The positivity of G implies G = Q², and the thesis follows at once.

Now we are ready to state a few results which go back to B. Sz.-Nagy [36], [37]. We first discuss when a flow Φ(t) is unitary with respect to some Hermitian structure h_Φ which is a solution of eq. (71). In other words, we shall establish conditions for eq. (71) to have solutions, and as a by-product we exhibit how to find some of them when appropriate conditions are satisfied.

Consider an automorphism T of a Hilbert space H with a Hermitian scalar product h₀, construct the orbits

$$\{T^n \psi\}_{n \in \mathbb{Z}},$$

and require that all of them, with respect to the norm induced by h₀, are bounded sets for any ψ. The use of the principle of uniform boundedness [38] shows that this is equivalent to requiring that T is uniformly bounded. We recall that the automorphism T on H is said to be uniformly bounded if there exists an upper bound c < ∞ such that

$$\| T^n \| \le c \quad \text{for all } n \in \mathbb{Z}.$$

For such an operator the following theorem [36] holds:

Theorem (B. Sz.-Nagy). For a uniformly bounded operator T there exists a bounded selfadjoint transformation Q, with c⁻¹ ≤ Q ≤ c, such that QTQ⁻¹ = U is unitary with respect to the fiducial h₀. This implies that T itself is unitary with respect to the Hermitian structure h_T(x, y) = h₀(Qx, Qy).

Proof (sketch). Define the invariant scalar product h_T(φ, ψ) as the limit, for n going to infinity, of h₀(Tⁿφ, Tⁿψ) =: h_n(φ, ψ). This is the limit of a bounded sequence of complex numbers, which does not exist in general, at least in the usual sense. Therefore one uses the generalized concept of limit for bounded sequences introduced by Banach and Mazur [39]. This generalized limit (denoted Lim) amounts to defining the invariant scalar product h_T as the transformed scalar product h_n "at infinity", where T is interpreted as the generator of a Z-action on H.
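Nagy's averaging argument can be mimicked numerically. In the sketch below, which is my illustration and not part of the paper, the Banach limit is replaced by an ordinary Cesàro mean of the transformed metrics h_n, which suffices for this simple 2×2 example: the mean converges to an invariant metric whose square root Q renders T unitary.

```python
import numpy as np

# A uniformly bounded but non-unitary operator: T = S U S^{-1}, with U a
# rotation (unitary) and S an invertible "change of metric".
theta = 2.0                                   # generic angle (irrational multiple of pi)
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[1.0, 0.7],
              [0.0, 1.0]])
T = S @ U @ np.linalg.inv(S)
print(np.allclose(T.T @ T, np.eye(2)))        # False: T is not h0-unitary

# Cesaro mean of the transformed metrics h_n(x, y) = h0(T^n x, T^n y),
# i.e. of the matrices (T^n)^T T^n -- a computable stand-in for the Banach limit.
N = 50000
Gn, M = np.eye(2), np.zeros((2, 2))
for _ in range(N):
    M += Gn
    Gn = T.T @ Gn @ T
G = M / N

# Q = G^{1/2} renders T unitary: Q T Q^{-1} preserves h0 (up to averaging error).
w, V = np.linalg.eigh(G)
Q = V @ np.diag(np.sqrt(w)) @ V.T
Utilde = Q @ T @ np.linalg.inv(Q)
print(np.allclose(Utilde.T @ Utilde, np.eye(2), atol=1e-3))   # True
```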
The same approach can be used [36] to deal with an R-action instead of the Z-action, so that:

Theorem. When the one-parameter group of automorphisms Φ(t) is uniformly bounded, that is, when

$$\| \Phi(t) \| \le c \quad \text{for all } t \in \mathbb{R},$$

there exists a bounded selfadjoint transformation Q such that QΦ(t)Q⁻¹ = U(t) is a one-parameter group of unitary transformations; equivalently, Φ(t) is unitary with respect to h_Q(x, y) = h₀(Qx, Qy).

Example. As a simple example [40], consider the group of translations on the line realized on L²(R) with a measure which is not translationally invariant, i.e.

$$h_\rho(\Psi, \Phi) = \int_{\mathbb{R}} \overline{\Psi(x)}\, \Phi(x)\, \rho(x)\, dx,$$

where ρ(x) is any function with 0 < α < ρ(x) < β < ∞, and denote by h_ρ the corresponding scalar product. If the limit

$$\lim_{x \to \infty} \rho(x) = a$$

exists, then it is trivial to compute the Banach limit, because it agrees with a limit in the usual sense. In fact, by the Lebesgue dominated convergence theorem we have

$$\lim_{n \to \infty} \int_{\mathbb{R}} \overline{\Psi(x)}\, \Phi(x)\, \rho(x + n)\, dx = a \int_{\mathbb{R}} \overline{\Psi(x)}\, \Phi(x)\, dx.$$

This shows that the Banach limit gives h_T(Ψ, Φ) = a h₀(Ψ, Φ), i.e. it is a multiple of the standard translation-invariant scalar product. Therefore

$$Q = \sqrt{a / \rho},$$

acting as a multiplication operator, and QT_tQ⁻¹ is unitary in L²(R, ρ(x)dx).

Having discussed a few results on the existence of invariant Hermitian structures, we may now look at the problem of compatible Hermitian structures.

The group of bi-unitary transformations: the infinite-dimensional case

In quantum mechanics the Hilbert space H is given as a complex vector space, because the complex structure enters directly the Schroedinger equation of motion. It is therefore natural to require that the two admissible triples (g₁, J₁, ω₁) and (g₂, J₂, ω₂) share the same complex structure: J₁ = J₂ = J. As we have shown, this entails that the two triples are compatible and the corresponding structures h₁ and h₂ are Hermitian on the same complex space H. These Hermitian structures are related by the operator G used before, which is selfadjoint with respect to both structures. The operator G generates a weakly closed commutative ring and a corresponding direct integral decomposition of the Hilbert space:

$$\mathcal{H} = \int_{\Delta}^{\oplus} \mathcal{H}_\lambda\, d\sigma(\lambda),$$

where Δ is the spectrum of the positive, bounded and selfadjoint operator G and dσ is the corresponding measure [41]. As G acts as a multiplication operator on each component space H_λ, a straightforward generalization of the results of the finite-dimensional case follows; in fact, the forms of h₁ and h₂ on H are

$$h_1(\varphi, \psi) = \int_{\Delta} \langle \varphi_\lambda, \psi_\lambda \rangle_\lambda\, d\sigma(\lambda), \qquad h_2(\varphi, \psi) = \int_{\Delta} \lambda\, \langle \varphi_\lambda, \psi_\lambda \rangle_\lambda\, d\sigma(\lambda),$$

where ⟨φ, ψ⟩_λ is the inner product on the component H_λ. As a result, bi-unitary transformations are of the form

$$U = \int_{\Delta}^{\oplus} U(\lambda)\, d\sigma(\lambda),$$

where U(λ) is a unitary operator on the component H_λ [42]. In particular, when G is cyclic, each H_λ is one-dimensional and U(λ) becomes multiplication by a phase factor [43]:

$$U\varphi = \int_{\Delta} e^{i\theta(\lambda)}\, \varphi_\lambda\, d\sigma(\lambda). \qquad (83)$$

Therefore, in this case the group of bi-unitary transformations is parametrized by the σ-measurable real functions θ on Δ. This shows that the bi-unitary group may be written as U_θ = e^{iθ(G)}. As an illustration one may take a free particle in a box: H_λ is one-dimensional for the particle in the [0, α] box, while it is two-dimensional for the [−α, α] box.

Concluding remarks

In this paper we have shown that, in analogy with the classical situation, it is possible to define alternative Hermitian descriptions for quantum equations of motion. We have not undertaken the analysis of using compatible alternative Hermitian structures to study quantum completely integrable systems; this step will require that our operator algebras be realized as algebras of differential operators acting on subspaces of square-integrable functions defined on the real spectrum of a maximal set of commuting operators, to be identified as position operators.
In the quantum-classical transition that we have mentioned in the introduction, we should analyze why the complex structure, which plays such a relevant role in quantum mechanics, does not show up in the classical limit. These issues will be taken up elsewhere in connection with the quantum-classical transition.
2019-04-12T09:06:59.140Z
2005-04-21T00:00:00.000
{ "year": 2005, "sha1": "9a0426be1aa9b38fe3a51e27a74f044c54253150", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "b89800047026d31039cce4dc90b82898bfe3612f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
250374682
pes2o/s2orc
v3-fos-license
THE SMART WAREHOUSE TREND: ACTUAL LEVEL OF TECHNOLOGY AVAILABILITY

Background: Some phrases become common and contemporary without justification. One such term for business activities is the term smart. In the field of logistics, the trend toward "smart" warehousing is increasingly attracting attention. It is necessary to define it and the stage at which intelligence can be achieved using available state-of-the-art technology, to follow the trend of the dehumanization of warehouse and, in general, manufacturing operations in the direction of Industry 4.0. Methods: The article is based mainly on observational methods, literature review, and document analysis, drawing on data obtained during the implementation of consulting projects. The subject is limited to warehouses designated to process palletised goods. Results: The available state-of-the-art solutions, like IoT, automation, robots, and communication standards, are close to enabling smart warehouse implementation. On the other hand, the lack of full cooperation between the various parties of a supply chain, and the long-term return on investment, stand in opposition to implementation. Conclusions: The smart warehouse is a matter of the future. The technology is predominantly achievable, but standardization, universalization and trust are necessary to reach the level of real implementation. Smart solutions are within the reach of a single enterprise, but only in isolation from its microenvironment.

INTRODUCTION

Smart is an expression that is becoming contemporary and common, in some areas without real technological justification. In the field of logistics, the term smart warehouse is becoming more and more popular, though the development of this technology is far from what is observed in the field of mobile communication. A definition of the solution, and a determination of the stage at which it can be achieved through the use of state-of-the-art technologies, are necessary. The literature review points to scientific gaps, whose filling motivated this survey. The scientific objective of the paper is to determine the advancement of warehouse technologies toward achieving a solution defined as "smart", which should initiate the discussion of smart warehouse implementation. In addition to filling the research gap, the aim of the article is to answer the question of to what extent it is possible to implement smart warehouses in practice, now and soon, confronting existing or emerging technologies with real needs. The trend of technology replacing humans, directing logistics to the 4.0 level, is accelerating and cannot be ignored, especially by science.

The methodological objective of the study is to determine the degree of applicability of available storage technologies as against human labour in the context of creating a smart warehouse, taking into account the justification and limitations of its implementation. An attempt, so far missing, should be made to verify the applicability of the smart warehouse idea; this is the aim of the paper. Are there technologies that can be implemented to put intelligent warehouses into practice, technologies that will allow intelligent warehousing operations? If so, can synergies be obtained from their simultaneous functioning? Is the potential implementation consistent? The results of the paper indicate areas of focus in order to implement realistic intelligent technologies that justify use of the term smart warehouse.
These should be further researched and given special consideration by practitioners. LITERATURE REVIEW "In recent years, several studies have proposed and discussed different types of smart warehouses, identified key challenges, and proposed several solutions to cope with these challenges. (…) However, very few studies exist on how smart warehouses are designed and the transition strategy and process to these new types of warehouses" [van Geesta et al. 2021]. It can be understood that there is no publication offering a holistic look at smart warehouse solutions together with the expected or possible participation of advanced technology in the warehouse process. A review of the titles and content of some scientific services confirms that the "smart warehouse" is treated selectively in the literature; there are few holistic approaches and evaluations of the applied solutions for eliminating unnecessary human work. Even publications focused on the 'smart warehouse' address only a part of the subject matter, for example software, warehouse organization, a single technology, or a case study (see Table 1). An implemented technology does not by itself "make" a warehouse smart, but it is often the main subject of a publication titled with the "smart warehouse" phrase. It is not easy to define when a warehouse can be called smart. Without all of those elements, can the solution not be described as intelligent, or are only a few of them essential? The article tries to gather all aspects of the smart warehouse and decide which level (percentage of technology autonomy) can be reached using smart solutions. MATERIALS AND METHODS Of the research methods used, the most significant were observational methods and case studies, based on many years of experience gathered during research activities and on information obtained from contacts with entrepreneurs. Slightly less important was the information obtained from a literature review. The quantitative techniques used included observation and document analysis, based on data obtained during the implementation of consulting projects. Due to the strong relationship between process automation and standardization of handled units, solutions for goods palletized into cuboid units were used for the analysis. The multiple types, parameters, and conditions of these units did not allow the assumption of complete standardization of flow objects. RESULTS The definition of a 'smart' building depends on the respective times: the 'smart house' of 1935 had an electric light in every room [Weiser 1996]; later the determinants were TV sets or computers. A definition of the smart warehouse from the Internet of Things (IoT) perspective was proposed by the IoT Agenda [2019]: "A smart warehouse is a large building in which raw materials and manufactured goods are stored that uses machines and computers to complete the common warehouse operations previously performed by humans". Although dimensions are a secondary matter, dehumanization is one of the most important factors of smart solutions. Technology merely in place of manual work, however, is not the deciding criterion. An interpretation of the smart logistics definition narrows the area slightly by adding "state-of-the-art" to the specification of the technology [Uckelmann 2008]: it enables people to focus on subjects that cannot be delegated, thus requiring more 'smartness'.
A smart warehouse is thus a technology-driven logistics solution, where the subjects that can be delegated are performed by state-of-the-art software and equipment, i.e., smart technologies. The spectrum of warehouse smart technologies is wide. First, it includes dedicated software: management systems of the warehouse (WMS, EWM), yard (YMS), forklift fleet (FFM), closely related transport (TMS), and finally the supply chain (SCM), along with resource planning of the enterprise (ERP) or manufacturing (MRP). Then the manipulation and storing technologies should be included [He et al. 2018], like automated storage and retrieval systems (AS/RS), conveyors, automated guided vehicles, autonomous machines, robots, and the whole spectrum of equipment supporting picking activities [Stoltz et al. 2017]. Those are the 'main' technologies, which can be supplemented by further ones. Their "intelligence" is demonstrated by their technological sophistication. Simple solutions, which have existed for many years, are only characterized by reflecting the reality created by humans. Smart solutions themselves create this reality, within the framework defined by the human factor. The Internet connection opens up an additional perspective, such as software-as-a-service (SaaS), cloud computing, cloud data storage, blockchain, and direct inter-technological communication, the Internet of Things (IoT) [Čolaković et al. 2020]. Communication protocols (such as EDI) and automatic identification (based on barcodes or RFID) cannot be omitted, nor can the application of augmented reality (AR). The newest trend, planned to be fully implemented in 30 years, is the Physical Internet. Last, but not least, and perhaps most important, is artificial intelligence (AI), the prospect of developing any smart technology toward independence from the "human factor". The set of technologies is ready to take over the work from warehousemen. Individual technologies can take different shares of an activity; for example, in the case of picking standardized items, manual work can be eliminated. More differentiated items lead to more sophisticated solutions. Sophistication is associated with higher expenditures and maintenance costs, which leads directly to the issue of financial effectiveness. In the case of complicated processes, financial effectiveness can be lost much earlier. There is probably a technological solution for every warehouse activity, but the level of outlays and daily costs are still a barrier, growing in parallel with technological sophistication. Activities that cannot be dehumanized, for example road transport, can be identified. We can imagine autonomous trucks on the motorway, but on a manoeuvring yard? We can imagine standardized load units, but standardized autonomous reloading systems for unstandardized units? Of course, technologies are available, but they require cross-company coordination and decades, not years, to universalize. Road transport and reloading are examples of activities that can become fully "smart", but with a long time horizon. What are the opportunities for available (or soon to be available) technology to take over tasks from workers? An attempt at an answer is given in Table 2, indicating the possibilities of technology while abstracting from the level of outlays and the return on investment. A low value of the blue bar indicates the need to use the "human factor", here marked in red. However, we as people cannot fully trust artificial intelligence. A minimum of supervision is, and one must assume will remain, necessary.
The handling of unusual situations will also be left to the human factor. The incorporation of artificial intelligence into every warehouse activity will theoretically be possible, but will it be financially effective? This assumption gives employees a minimum 10% share in all dehumanized activities. The availability of technology has at least two aspects. The first is technological readiness, the existence of proven technology. The second is practice, where implementation is limited by other factors, like safety or the lack of standards. An example is road transport, where theoretically technology offers solutions, especially in the case of cars. But autonomous trucks may cause threats and have to deal with congestion, varying road conditions, and unpredictable humans, which practically eliminates implementation in the coming decade. The set of warehouse process activities, with the estimated workload shares for technology and humans, is presented in the table. Additionally, the tasks for warehouse workers and the technological solutions are stated. And finally, the technologies (software and equipment) tailored to individual tasks; this part should be treated as proposals or examples, because there can be as many versions of the implementation as there are designers. Only the technologies available now and in the near future are taken into consideration. The assumption is a possible usage of IoT in the whole process, when the equipment and modular packages are adequately prepared. This will ensure the synergy effect. The degree of applicability of available storage technologies against human labour is below the assumed 80% level, especially taking into account the limitations of implementation. The justification for the implementation of smart warehouse technology is not general and requires a prior custom analysis, particularly in terms of profitability. Table 3. Evaluation of currently available technologies compared to the requirements of the smart warehouse. Source: own work. DISCUSSION There are technologies whose implementation makes it possible to realise the intelligent warehouse, and a synergistic effect of their implementation can be expected. Consideration should be given to determining the reasonableness of the overall implementation. There are two main factors favouring smart technologies. The first is the growing cost of labour, justifying proportionally larger investment outlays. The second is the excellence of technologies in terms of accuracy, repeatability, consistent quality, and work continuity. On the other hand, Kamali [2019] states the main disadvantages of the smart warehouse, including the high level of outlays, requiring several years to reach financial reimbursement, the need for specialised personnel, the risk of whole-system stoppages, and long-term dependence on particular spare parts, hardware, and software providers. Two more important ones have to be added: the required standardization of turnover items, and the fact that every complex technological system is a prototype, prone to faults in its 'infancy'. Technology is adaptable, but only within the framework agreed during the design process. Meaningful changes require additional investment, much higher than in the case of retraining a human workforce.
The reasonableness of smart warehouse implementation has to be verified separately in every case; there is no universal answer. The above comparison considers not only the available technologies but also the validity of their implementation. Theoretically, it is possible to automate or robotize all warehouse activities, even for non-standard storage units. However, the return on investment period of such a system would be counted in tens of years, and for this reason it is a purely theoretical solution. The shares of the human and technological factors throughout the scheme are on average equal. Today's state-of-the-art warehouse is not a smart warehouse when about 50% of activities have to be performed by workers. The assumed level is beyond the reach of differentiated supply chains. In the case of warehouses that are part of an enterprise isolated from the supply chain, where standardization of turnover objects is conducted, implementation opportunities are more likely. Most publications with the phrase "smart warehouse" focus on one smart solution or a limited number of them, which does not allow the warehouse to "reach level 4.0". It is only the set of all, or a significant number of, solutions, possible and indicated for implementation in a given configuration, that allows a warehouse system to be called "smart". It is therefore difficult to agree with most of the authors of the mentioned publications that they touch upon a complete smart solution. However, it cannot be denied that they focus on essential elements of a smart warehouse. It should be emphasized that this paper is the first comprehensive approach to determining the degree to which modern technologies can be implemented in warehousing in the context of Industry 4.0, so there is no possibility of referring to the results of similar studies. This indicates a possible direction for further and more detailed studies. CONCLUSIONS The table does not give a straight answer as to whether the values are close enough to 100% to conclude that the set of presented technologies constitutes a real smart warehouse, since there are no clearly defined criteria. Even if we assume a level of 80%, this value will not be reached in many areas, which means that the smart warehouse in the holistic sense is currently unattainable. This also applies to the prospects for the next few years. However, this does not mean that we should not strive for it and make attempts, even in separate areas of warehouse logistics. It has to be underlined that the trend in the direction of dehumanization of the main operations is strong. What can be done to decrease the so-called human factor below the assumed share values and reach a real smart warehouse level, with values close to a minimum of 80% in every row? The first thing is standardization on a global scale. Standardization covers packages, including automatic identification means and communication protocols. Second comes automation and, in the case of road transport, standardization of transport means. The third is trust, not only in automation effectiveness but in the honesty of cooperating parties. These requirements set the bar very high, and it is hard to believe that they will be met in the next 10 to 20 years. The real smart warehouse is still far away, but some fields of storage can be "smart" now, as described, for example, by Žunić et al. [2018] or Bolu et al. [2019]. It is important to emphasize that smart warehouses can arise (and are arising) in isolation from the microenvironment.
Little stands in the way of creating smart solutions within the reach of a single enterprise, though often in isolation from its suppliers and customers. Despite the availability of a wide range of technological solutions, implementation constraints, including standardization, the need for close cooperation within the supply chain, and the unavoidable transfer of goods between facilities through public areas over which the entrepreneur has little control, are obstacles that business practice finds difficult to overcome.
Detection and imaging of microcracking in complex media using coda wave interferometry (CWI) under linear resonance conditions. This letter presents the development of ultrasonic coda wave interferometry to locate and image microcracks created within a consolidated granular medium, polymer concrete. Results show that microcracks can be detected through coda waves when the polymer concrete is submitted to a linear resonance. In addition, the multiple scattering is revealed to depend on the plane in which the resonance is excited. Acoustic emission measurements were therefore performed under flexural resonances generated in the (XY) and (XZ) planes to verify the existence of a possible structural anisotropy related to the microcracks. The recorded acoustic signatures revealed important differences in the frequency contents, depending on the considered plane, with consequent richness in the mechanisms involved during the microcrack vibrations. materials, is not always suitable, especially in the case of consolidated granular media. Indeed, conditioning and relaxation effects might appear even in the absence of micro-cracks14 and disturb the defect characterization procedure. In this work, we present a method to predispose micro-cracks to interact with a high-frequency wave when the propagating medium is submitted to a linear vibration. The method allows detecting, locating, and imaging defects using an original approach, very different from the passive imaging technique, which is generally hard to implement when real-time monitoring is necessary. The characterized consolidated granular material, whose dimensions are 160×40×11 mm³, consists of an epoxy resin matrix reinforced by sand and aggregates at 40%, 30%, and 30% volume fraction, respectively. This polymer concrete was submitted to a three-point bending fatigue test performed under a 3 kN loading force, where the distance between the supporting pins was set at 120 mm. The created micro-changes in such a complex medium can be detected by taking advantage of the multiply scattered waves. Structural variations are determined by cross-correlating two waveforms taken before and after the fatigue test, or at low and high excitation amplitudes. The time-windowed normalized cross-correlation function R(t_s) is expressed as: R(t_s) = [∫_{t−T}^{t+T} u(t′) ũ(t′ + t_s) dt′] / [∫_{t−T}^{t+T} u²(t′) dt′ · ∫_{t−T}^{t+T} ũ²(t′) dt′]^{1/2}, where u(t) is the waveform corresponding to the initial state (or low excitation) and ũ(t) is the waveform obtained at the fatigued state (or high excitation). 2T is the length of the considered time window, which is centered around t (t >> T). Consequently, the decorrelation K(t_s) between u(t) and ũ(t) can be determined as: K(t_s) = 1 − R(t_s). To detect and locate the created scatterers (microcracks) within the weakly fatigued polymer concrete, the application of the CWI along different ultrasonic paths did not show any clear time delay between u(t) and ũ(t), even for increasing excitation amplitudes, where the correlation function produced a sequence of autocorrelations, i.e., K(t_s) ≅ 0. In order to increase the sensitivity of the CWI, we used a shaker controlled by a power amplifier (46 dB) delivering a peak-to-peak excitation up to 120 V (see Fig. 1). Vibration modes of the polymer concrete sample, generated in a clamped-free configuration, are detected using an accelerometer attached to the free side of the sample.
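Since the CWI processing chain reduces to these two formulas, a minimal numerical sketch may help. The helper below is hypothetical (not from the letter), works in samples rather than seconds, and assumes the common CWI convention of evaluating K at the lag t_s that maximizes R:

```python
import numpy as np

def cwi_decorrelation(u, u_tilde, center, half_window, max_lag):
    """Decorrelation K between a reference coda u and a perturbed coda u_tilde,
    from the time-windowed normalized cross-correlation R(t_s) over a window
    of 2T samples centered at `center`. All arguments are in samples, and the
    window plus lags is assumed to lie inside both traces."""
    lo, hi = center - half_window, center + half_window
    ref = u[lo:hi]
    r_max = -1.0
    for lag in range(-max_lag, max_lag + 1):
        seg = u_tilde[lo + lag:hi + lag]
        r = np.dot(ref, seg) / np.sqrt(np.dot(ref, ref) * np.dot(seg, seg))
        r_max = max(r_max, r)
    return 1.0 - r_max  # K = 1 - max over lags of R(t_s)

# illustrative use on two recorded traces u, u_tilde:
# K = cwi_decorrelation(u, u_tilde, center=30000, half_window=2000, max_lag=100)
```

In practice this is repeated for several window centers t along the coda, which is what produces the position-dependent decorrelation curves discussed below.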
Since the aim of these experiments is not to create nonlinear resonances, we performed preliminary fast and slow nonlinear dynamics measurements in order to probe the conditioning of the consolidated granular sample and its subsequent relaxation. For the first three flexural resonances, the results show that the material vibrates in the linear regime as long as the excitation amplitude is kept below 20 mV before amplification. Under these linear vibration conditions, we perform through-transmission measurements of ultrasonic pulses generated using identical large-band transducers mounted opposite each other, where the emitter is excited with a burst signal at 440 kHz. In order to generate the resonance in the linear regime, the shaker was excited continuously at 10 mVpp while the ultrasonic emitter transducer was excited by a pulse generator adjusted to deliver 50 mVpp, amplified at 46 dB. The experimental setup was calibrated through a set of measurements, where different aspects like the positioning of the sensors and the coupling were taken into account. The numerous measurements showed that, for a given position of the transducer, the reproducibility of the delay between the multiply scattered signals does not exceed 50 ns (which corresponds to the sampling period of the acquisition system). On the other hand, due to the sensitivity of the coda wave to the environmental conditions (which may change between two measurements), reference signals were recorded at every position with the material at rest. The evolution of the decorrelation coefficient as a function of the transducer positions along the x-axis is presented in Fig. 2. It shows that, depending on the excited linear resonance, the coda of the ultrasonic signals is affected differently, according to the strain generated at the scatterers. Furthermore, Fig. 2 shows that the microcracks created during the fatigue test do not seem to propagate along a single line but are diffused between the supporting pins. As a function of the excited resonance, the strain distribution at the microcracks changes, with a consequent impact on the coda of the recorded signals observed through the decorrelation coefficient K. This result is in accordance with previous works on the fracture behavior of inhomogeneous materials submitted to a three-point bending test. In such materials, microcracks result from the interaction between the gradient of the stress field and the distribution of the fracture stresses, where the stress can be higher than the lower limit of the fracture distribution even at the end of the linear domain15,16. With a view to imaging the scattering of the ultrasonic waves, the contact transducers were replaced with air-coupled ultrasonic transducers whose frequency bandwidth goes from ~300 kHz to ~700 kHz. In addition to being contactless, the air-coupled method improves the spatial resolution and is much faster than the contact approach. The air-coupled transducers are excited with a sine-Gaussian profile signal emitted with a 1 kHz repetition frequency. Received signals are sampled at 15 MHz over a dynamic range of 16 bits and amplified at 60 dB. Due to the important difference existing between the acoustic impedances of air (Z_air ≅ 416 Rayl) and the polymer concrete (Z_PC ≅ 9 MRayl), the reflection coefficient of the ultrasonic wave is important, R ≅ 99.98%.
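This value follows directly from the standard normal-incidence intensity reflection coefficient R = ((Z_PC − Z_air)/(Z_PC + Z_air))². A quick check with the impedances quoted above:

```python
# Normal-incidence intensity reflection coefficient at the air / polymer-concrete interface
Z_air, Z_pc = 416.0, 9.0e6                      # acoustic impedances [Rayl]
R = ((Z_pc - Z_air) / (Z_pc + Z_air)) ** 2
print(f"R = {R:.4%}")                           # -> ~99.98%, as quoted above
```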
However, the high voltage used for the excitation amplitude (~200 V), combined with an averaging of the received signals (~50), allowed a good signal-to-noise ratio (SNR), around 30 dB for the ballistic wave. The SNR corresponding to the coda of the recorded signals was above 12 dB. Under these conditions, the reference signals correspond to the propagation through the polymer concrete before activating the shaker. Received signals are then recorded under weak vibrations, as explained above, and the coda waves are analyzed by considering a time window corresponding to ~20 µs (8 periods). Such a procedure yields a through-transmission image of the polymer concrete, as shown in Fig. 3, demonstrating that under a linear resonance the sensitivity of the CWI to micro-cracks is improved, allowing an active and simple imaging of the microcracked area using an original approach. When the same air-coupled coda experiments are repeated for resonances excited in the XY plane, the decorrelation coefficients turn out to be small (K(t_s) < 0.02). Such a weak interaction between the ultrasonic wave and the existing microcracks is mainly due to the direction along which the mechanical force is applied during the bending test. The moderate force load creates micro-cracks within the polymer matrix and/or at the interface between aggregates and matrix. The loading conditions orient the cracks mainly in the force direction17. However, one might expect important crack kinking angles away from the beam center, with a local influence of the aggregate size and distribution16. In view of the structural anisotropy of the cracks, it is expected that the mechanisms at the microcracks change depending on the considered resonance plane. In order to verify the change in the generated mechanisms, acoustic emission (AE) measurements were performed when the micro-cracked samples are submitted to the same flexural resonances in the XY and XZ planes. Signals were collected with a 5 MHz sampling rate along 5012 points for each AE hit using broadband piezoelectric sensors, with the signals pre-amplified by 40 dB. First, we noticed that the acoustic activity, in terms of the number of generated signals, is not the same: the acoustic activity was found to be more important in the XZ plane, as can be seen in Fig. 4. In addition, the characteristics of the acoustic emission signals are not the same. This has been verified through the frequency contents of the acoustic emission signals recorded along one resonance cycle in both configurations. Note that the frequency range of the acoustic emission signals is completely different from the ones used to generate the different resonances, which are below 4 kHz. In the same figure, we can see that the microcracks behave differently depending on the plane in which the resonances are excited. Indeed, in view of their orientation, microcracks can be submitted to shear and/or compression forces, which can enhance mechanisms as different as clapping and sliding, involving different frequency components. While we expect that changing the resonance plane will mainly favor one mechanism over another, we believe that understanding the experimental observations needs a deeper analysis. Indeed, the presence of memory effects, beside the aforementioned activated mechanisms (clapping or sliding), makes an analytical formulation of the relationship between stress and strain not feasible.
As an alternative, a multi-state statistical description of the microcracked polymer concrete, based on the generalized Preisach-Mayergoyz (PM) formalism, seems to be a promising approach to link the behavior of simple mesoscopic elements (introduced at the microscopic scale to describe the microcrack behavior) to the observations performed at the macroscopic scale17. Our future work will be developed in this direction.
Interfacial Layers between Ion and Water Detected by Terahertz Spectroscopy. Dynamic fluctuations in the hydrogen-bond network of water occur on femto- to nanosecond timescales and provide insights into structural/dynamical aspects of water at ion-water interfaces. Employing terahertz spectroscopy assisted with molecular dynamics simulations, we study aqueous chloride solutions of five monovalent cations, namely, Li, Na, K, Rb, and Cs. We show that ions modify the behavior of surrounding water molecules and form interfacial layers of water around them with physical properties distinct from those of bulk water. Small cations with high charge densities influence the kinetics of water well beyond the first solvation shell. At terahertz frequencies, we observe an emergence of fast relaxation processes of water, with their magnitude following the ionic order Cs > Rb > K > Na > Li, revealing an enhanced population density of weakly coordinated water at the ion-water interface. The results shed light on the structure-breaking tendency of monovalent cations and provide insights into the properties of ionic solutions at the molecular level. Introduction Understanding the properties of aqueous salt solutions is critical for many branches of fundamental research and technology, including the modeling of biological processes,[1-3] process design in the chemical industry,4 drug design in the pharmaceutical industry,5 and electrolyte performance in fuel cells and batteries.6 A large body of active experimental and theoretical work focuses on the behavior of neutral and charged surfaces immersed in electrolyte solutions.[7-11] However, many questions remain unanswered, and our understanding of the structure and dynamics of water in ionic solutions at the molecular level is still far from complete. Specifically, the behavior of water at ion-water interfaces is complicated. The Coulombic potential of an ion modifies the behavior of the surrounding water molecules and forms interfacial layers of water around the ion with physical properties distinct from those of bulk water. The structural rearrangements of water molecules in these intermediate layers at the ion-water interface modify the macroscopic properties of ionic solutions.[12-16] No consensus prevails so far regarding the length scales of the electrostatic effects of ions in solution. A variety of experimental techniques have reported that ions affect primarily the dynamics and structure of the first solvation shell of water molecules in direct contact with the ions, thus endorsing local effects of ions on water. These include optical Kerr-effect spectroscopy,17 terahertz Kerr-effect spectroscopy,10,18,19 two-dimensional Raman-terahertz spectroscopy,20 femtosecond time-resolved infrared vibrational spectroscopy,12,13 terahertz spectroscopy,14,21-23 NMR,24 dielectric spectroscopy,[25-31] and molecular dynamics simulations,15,32-34 which have significantly improved our understanding of aqueous salt solutions. However, long-range effects of ions on the water structure have also been witnessed.14,35,36 Dielectric spectroscopy has shown mixed results, with evidence that in certain cases the bound water molecules extend beyond the first solvation shell of ions.21,26-30,37 Thus, detecting the structural dynamics of water in the interfacial layers between ion and water, as well as identifying their functions, remains a significant challenge.
Ions in aqueous solution significantly alter the structure and dynamics of water molecules from their transient tetrahedral configuration.7,8,12-14 Water molecules in the immediate neighborhood of ions experience a strong local electrostatic field, which significantly reduces the net polarization of water at the ion-water interface. The strong charge density of a small ion causes a rearrangement of the dipolar water molecules, forming hydration shells, whereas large ions interact weakly with water molecules. Moving ions orient adjacent water molecules in the direction opposite to that in which they would otherwise rotate,38 further reducing the polarization of water. Thus, ions strongly influence the dynamics of water molecules beyond their hydration shells. Based on their influence beyond their hydration shells, ions have often been classified as structure making and structure breaking, or "kosmotropic" and "chaotropic", respectively, following the Hofmeister series of ions.7 The dynamics of bulk water span a large range of frequencies, from gigahertz to terahertz. Exploration of the dynamics of water in this range will help us to understand the ion-water interaction. In this context, it is interesting to look into the interfacial layers (i.e., the hydration shells and beyond) of water molecules around monovalent Hofmeister cations. The dielectric response of aqueous salt solutions provides valuable information on the structure as well as the dynamics of water and solvent in the solutions. Dielectric spectroscopy at megahertz to gigahertz frequencies has earlier been used to access the dynamics of hydrated ion-pairs and hydration shells in ionic solutions.25,26,28,30,37 There are only a few studies that provide detailed insights into the terahertz dielectric response of aqueous salt solutions.12,14,21,23 Beyond the hydration shells, valuable information intrinsic to the effect of ions on the water structure can be obtained from the fast relaxation processes of water in aqueous salt solutions at terahertz frequencies. Such studies have typically been overlooked because of the strong absorbance of water at these frequencies. Here, we study a series of aqueous salt solutions by means of extended megahertz to terahertz dielectric spectroscopy assisted with molecular dynamics simulations. Measurements on chloride salts of five monovalent cations, namely, Li+, Na+, K+, Rb+, and Cs+, with increasing size (or decreasing charge density), have been performed (Table 1). Based on the collective results over the series of five monovalent cations, we show that the order of increase in the fast water dielectric strengths observed at terahertz frequencies is proportional to the decrease in the orientational correlation among water molecules. Methods Materials. Dielectric spectroscopy measurements have been performed on aqueous solutions of alkali chlorides with five monovalent cations, namely, LiCl, NaCl, KCl, RbCl, and CsCl, at room temperature (25 °C). The salts were obtained from Sigma Aldrich, USA. The dielectric response of four solutions with concentrations of 0.25, 0.50, 1.0, and 2.0 M was measured for each salt. The aqueous solutions were prepared by dissolving the salts in Milli-Q water and stirring the solution for a few hours, followed by equilibration at room temperature. Megahertz to terahertz dielectric spectroscopy.
To explore the effects of ions on aqueous solutions, we have employed a frequency-domain terahertz dielectric spectrometer covering a frequency range from 500 MHz to 1.12 THz (0.017-37.36 cm⁻¹).14,39-44 The spectrometer consists of a vector network analyzer (Agilent, N5225A PNA) and terahertz frequency extenders from Virginia Diodes, Inc. (Charlottesville, VA), used to generate the terahertz waves. Eight different rectangular-waveguide (WR) modules are employed to cover the frequency range of 9 GHz-1.12 THz, as described in previous works.39 To characterize the low-frequency dielectric response from 500 MHz to 50 GHz, a dielectric probe (HP 85070E) has been employed. At terahertz frequencies, a variable-path-length sample cell has been used for the dielectric measurements. Two parallel windows are installed inside the sample cell, one fixed and the other mobile with submicron (~80 nm) precision, for changing the thickness. To control the temperature, the sample cell was cooled down with Peltier coolers (Custom Thermoelectric, 27115L31-03CK) and heated up with high-power resistors embedded in the aluminum sample cell. The temperature is controlled with an accuracy of ±0.02 °C using a Lakeshore 336 temperature controller. Additional information regarding the experimental setup can be found in earlier reports.14 Molecular dynamics simulations. Molecular dynamics simulations have been performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS).47 The SPC/E water model is used, as it provides relaxation times for bulk water compatible with those obtained from the dielectric measurements. Details of the simulations are provided in the supplementary material. Results Dielectric spectroscopy. The dynamics of water and ions in aqueous salt solutions are complex in nature, encompassing several inter/intra-molecular relaxations. At megahertz to gigahertz frequencies, the dynamic processes typically involve the tumbling of ion-pairs and the orientational modes of water molecules in the solvation shells of ions and in bulk water.8,14,25,28,39-43,48 At terahertz frequencies, the structural rearrangement due to breaking and reforming of hydrogen bonds of water molecules, as well as the orientation of single water molecules, becomes prominent, originating from the interaction between ions and water molecules in the aqueous solutions.14,49-52 The complex refractive index is conventionally represented as a function of frequency, ν, in the form n̂(ν) = n(ν) + iκ(ν), where n(ν) is the refractive index and the extinction coefficient κ(ν) = α(ν)c/(4πν) is determined by the absorption coefficient α(ν). The measured absorption coefficients and refractive indices are shown in the supplementary material, Figs. S1a, S1b, and S1c. The solvation of salt species into water significantly changes the absorption coefficient as well as the refractive index. A significant increase in the terahertz absorbance with respect to water has been observed in salt solutions containing cations with large ionic size, such as K+, Rb+, and Cs+, as compared to the smaller ones, i.e., Li+ and Na+, reflecting the diverse nature of ion effects on aqueous solutions. The real, ε′(ν), and imaginary, ε″(ν), parts of the complex dielectric response can be calculated from the absorption coefficient and refractive index using the relation ε*_sol(ν) = n̂²(ν) = ε′(ν) + i[ε″(ν) + σ/(2πνε₀)], where σ/(2πνε₀) is the Ohmic loss due to the electrical conductivity, σ, of the salt solution (Fig. S2), and ε₀ is the permittivity of the vacuum. The real and imaginary parts of the dielectric response are shown in Figs. 1c and 1d for the NaCl and RbCl solutions, respectively. The dielectric spectra for the LiCl, KCl, and CsCl solutions are presented in the supplementary material, Figs. S1d, S1e, and S1f.
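A minimal numerical sketch of this conversion, assuming SI units and the conventions reconstructed above; the function name and argument layout are illustrative, not from the paper:

```python
import numpy as np

C = 2.998e8          # speed of light [m/s]
EPS0 = 8.854e-12     # vacuum permittivity [F/m]

def relaxational_permittivity(nu, n, alpha, sigma):
    """Relaxational permittivity (eps', eps'') from the refractive index n,
    the absorption coefficient alpha [1/m], and the dc conductivity sigma [S/m]
    at frequency nu [Hz]. The Ohmic loss sigma/(2*pi*nu*eps0) is subtracted
    from the measured imaginary part, as in the relation above."""
    kappa = alpha * C / (4 * np.pi * nu)       # extinction coefficient
    eps = (n + 1j * kappa) ** 2                # eps*_sol = (n + i*kappa)^2
    eps_pp = eps.imag - sigma / (2 * np.pi * nu * EPS0)
    return eps.real, eps_pp
```

This is the form of the complex permittivity that is subsequently deconvoluted with the multi-Debye model (Eq. 2 below).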
The interaction between ions and water molecules strongly depends on the charge density of the ions.53 Small monovalent ions with a high charge density interact strongly with the surrounding water molecules, whereas such interactions are relatively weak for large ions with a low charge density. To understand the relaxational modes of water molecules in ionic solutions, the dielectric response in the megahertz to gigahertz frequency range has been studied extensively.14,26,28,54 The technique provides crucial information about the dynamics of hydration water. The dynamics of hydration water in direct contact with ions depends strongly on the charge density of the ions and varies from ~10 to ~55 ps. As an example, a hydration-water dynamics of ~55 ps (~2.8 GHz) has been observed in the 0.25 M LiCl solution (Fig. S3). This value is of the same order as that reported in the literature.25,55 The dynamics of hydration water around large monovalent ions such as K+, Rb+, and Cs+ occurs at gigahertz to terahertz frequencies and is more or less similar to the orientational dynamics of pure water.25,27,56 Thus, we focus on the analysis of the dielectric response at gigahertz to terahertz frequencies. The presence of solutes with charged surfaces changes the gigahertz to terahertz dielectric response of water.2,3,14,43,57 Heterogeneous dynamics of water has been reported around ions, proteins, and other species. A Debye model composed of several Debye components has proven useful to address the dynamic heterogeneity in a broad range of aqueous solutions.29,58 In that view, the dielectric response of aqueous salt solutions in this frequency range can adequately be analyzed using a Debye model containing three individual components in the form14,50,51 ε*_sol(ν) = ε_∞ + (ε_S − ε_1)/(1 + i2πντ_D) + (ε_1 − ε_2)/(1 + i2πντ_2) + (ε_2 − ε_∞)/(1 + i2πντ_3), where τ_D is the relaxation time corresponding to the collective orientational dynamics of bulk water in the solution. The relaxation times τ_2 and τ_3 correspond to the fast relaxation processes, namely the orientation of single water molecules and the structural rearrangement due to breaking and reforming of hydrogen bonds around the solvated ions, respectively. The numerators associated with each term, Δε_D = ε_S − ε_1, Δε_2 = ε_1 − ε_2, and Δε_3 = ε_2 − ε_∞, correspond to the dielectric strengths of the corresponding processes. ε_∞ is the dielectric contribution from all polarization modes at frequencies much higher than the range probed in our dielectric measurements, and ε_S is the static permittivity, given by ε_S = ε_∞ + Δε_D + Δε_2 + Δε_3. Solvation of salts into water significantly alters the hydrogen-bond network of bulk water and can be recognized from the dielectric response of the aqueous salt solutions. As shown in Figs. 1 and S1, the absorption coefficients change significantly as the salt concentration increases. The peak value of the dielectric loss is reduced with increasing salt concentration for all solutions. To quantify the individual contributions from the different relaxation processes, the dielectric response of the aqueous salt solutions is analyzed using Eq. 2. An example of such fits of the dielectric response to Eq. 2 is shown in Fig. 2. In order to probe the interaction of water with the cations, we estimate the probability distribution of the oxygen atoms of water molecules with respect to the ions, X+-O, where X represents a cation and O is the oxygen of water (Fig. 4a, for 0.5 M salt solutions). The first peak in the cation-water radial distribution functions, RDF_X-O(r), indicates the occurrence of the first hydration shell with respect to the cation X.
The position and amplitude of the first peaks agree closely with previous results.56 The peak position occurs at a shorter distance for small cations, such as Li+ and Na+, indicating that water molecules in the hydration shells are strongly bound to these cations, whose charge density is confined to a narrow volume. In contrast, the first peak in the radial distribution functions of the bigger cations with their charge distributed over a larger volume, such as K+, Rb+, and Cs+, appears at a longer distance with respect to the cations, revealing that the hydration water molecules within the first solvation shell of these cations interact relatively weakly. In addition, the first peaks in these functions of the bigger cations are broader, indicating that the water molecules are rather diffused in the space around the surface of large cations with lower surface charge densities. The interaction between the hydrogen atoms of water and the anion (Cl-) has also been characterized with the radial distribution function (Fig. 4a, inset). The first peak of the probability distribution of the hydrogen atoms of water with respect to Cl- (Cl--H) emerges at a long distance of ~3.10 Å. This peak appears at almost the same position as the first peak in cesium-oxygen, Cs+-O (~3.30 Å), and this distance is similar to the distance between water molecules in the water-water radial distribution function (~3.30 Å),56 indicating a minor difference between the interactions of water with Cl-, Cs+, and water. The dynamics of water molecules in the first solvation shell can be extracted from the orientational correlation function (OCF), C(t), as shown in Fig. 4b. The OCFs show three-exponential decay characteristics and can be analyzed in the form C(t) = A_A e^{−t/τ_A} + A_B e^{−t/τ_B} + A_C e^{−t/τ_C} (Table 1). An additional component with a weak contribution to the total spectrum has also been identified, with a relaxation time ranging from 140 to 180 ps, which appears to be the tumbling motion of ion-pairs previously observed for monovalent cations.25,26 The relaxation dynamics of water molecules in the first hydration shell of Li+ and Na+ is significantly slower than that of Rb+ and Cs+ (Fig. 4b). The relaxation dynamics of water in the first hydration shell of Li+ and Na+ shows two long exponential decay components with no contribution from the bulk water relaxation (Table 1), demonstrating a strong interaction of Li+ and Na+ with the surrounding water molecules. In the case of K+, Rb+, and Cs+, however, all three dynamical processes of water molecules are present. Water molecules in the first solvation shell of the large ions, having a weak cation-water interaction, easily exchange with bulk water, and therefore the contribution of bulk water to the first solvation shell appears in the OCFs. The relaxation times of water molecules in the first solvation shell of the ions (i.e., slow water) are 32, 25, 12, 11.5, 10.5, and 8.8 ps for Li+, Na+, K+, Rb+, Cs+, and Cl-, respectively (Table 1). Clearly, the orientational dynamics of water molecules in the first hydration shell of the small cations is kinetically more retarded than that of the large ions. The orientational dynamics of water molecules interacting with Cl- is the least affected among all the ions and is nearly indistinguishable from that of bulk water (Table 1). Length-scale of cation-water interaction. The length-scale of the interaction between an ion and water molecules can be estimated from the OCF, C(t)_X-O, of the water molecules around the ions.
The OCFs provide information on the dynamics of water molecules around an ion as well as qualitative length-scales, i.e., how far out into the solution the influence of an ion can reach. To explore the dynamics of water around an ion, the correlation functions are determined between the surface of the ion and successive solvation shells (i.e., local minima in the radial distribution function). The OCFs of water around the ions as a function of the distance from the ion have been analyzed up to 14 Å for Na+, Cl-, Li+, K+, Rb+, and Cs+ (Figs. 4c, 4d, and S6). The length-scale of the ion-water interaction can be estimated as the distance from the ion at which the OCF becomes nearly invariant with increasing water thickness. Based on this criterion, the dynamics of water molecules is influenced up to ~6.9 Å from Li+. Beyond this distance, the OCF remains unchanged and is mostly governed by the bulk water dynamics. For the Na+ and K+ ions, the dynamics of water molecules is affected up to about 5.4 Å and 3.7 Å, respectively. For the large cations, Rb+ and Cs+, only the water molecules in the first solvation shell are somewhat affected. This observation is confirmed by the gigahertz dielectric response, suggesting that small cations with high charge densities have a greater tendency to accumulate water molecules around them. The strong local electrical field of ions in the aqueous solution leads to a reduction of the dielectric response of water, which is referred to as "depolarization".28,48 As a result, the dielectric strength of the main relaxation process decreases monotonically with increasing salt concentration (Fig. 3a). The reduction of the dielectric response has consistently been observed in salt solutions and originates from cumulative effects that include the dilution of water and the depolarization of water due to the presence of ions.25,26,29,30,37,48,54 To correct for the dilution of water, we calculate the dielectric response originating from bulk water in the solution. If we assume that all water molecules in the solution participate in the bulk water process, their dielectric strength can be estimated based on the partial specific volume of water in the solution (the line shown in Figs. 5a and 5b). The kinetic depolarization is commonly estimated from the Hubbard-Onsager expression Δε_kd(c_s) = −(2/3)·[(ε_S − ε_∞)/ε_S]·(τ_D/ε₀)·σ(c_s), where σ(c_s) is the electrical conductivity of the aqueous solution (Fig. S2). The dark brown curves (in Figs. 5a, 5b, and S7) represent the correction due to the contribution from the kinetic depolarization. The dielectric contribution of the static depolarization can then be extracted and used to estimate the number of slow water molecules per solvated ion, i.e., the "hydration number". The orientational dynamics of these water molecules appears at lower frequency (Fig. S3). Deconvolution of the dielectric response at megahertz to gigahertz frequencies reveals an emerging contribution from the hydration water molecules around the ions (Figs. 5a, 5b, and S7). Such an estimation has been carried out for a large number of aqueous salt solutions,25-27 using this relationship (Table 1). According to the coordination number of Li+ (Table 1), about four water molecules are accommodated within the first hydration shell. The hydration water (~8.5) extracted from the dielectric spectroscopy exceeds the first hydration shell. Thus, the total hydration water extends to the second hydration shell around the Li+ cation.
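Coordination numbers like those quoted from Table 1 are conventionally obtained by integrating the radial distribution function up to its first minimum; a minimal sketch under that assumption (the function name and units are illustrative):

```python
import numpy as np

def coordination_number(r, g_r, rho, r_min):
    """Running coordination number N = 4*pi*rho * integral_0^{r_min} g(r) r^2 dr,
    i.e. the number of water oxygens inside the first solvation shell, where
    rho is the oxygen number density (e.g. ~0.0334 A^-3 for water, with r in A)
    and r_min is the position of the first minimum of the X+-O RDF."""
    mask = r <= r_min
    return 4 * np.pi * rho * np.trapz(g_r[mask] * r[mask] ** 2, r[mask])
```

Comparing this geometric count with the dielectric "slow water" count is what distinguishes a filled first shell (about four molecules for Li+) from a hydration number (~8.5) that spills into the second shell.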
It is noteworthy that water molecules outside the hydration shells are also affected by the structural discrepancy between the neat (i.e., bulk) water and the hydration water. The structural dynamics of these water molecules is modified as a result of the ion-water interaction. In the case of Na+, the hydration water molecules (~4.5) can be accommodated well within the first solvation layer, while the total number of water molecules affected by the ion may extend to the second solvation shell. The large monovalent cations (K+, Rb+, and Cs+) have a similar coordination number, about 8 (Table 1). However, the number of hydration water molecules bound to K+ is ~2.2, whereas this value is close to zero for Rb+ and Cs+. Water molecules around large monovalent cations do not experience a strong electrostatic effect; however, they are still influenced by the electrical charge on the surface of the ions. Thus, the structural aspects of these water molecules are not the same as for bulk water. The total dielectric strengths of the fast relaxation processes increase from Li+ to Cs+ cations with respect to that of bulk water (Fig. 3b, inset). In view of the structure-breaking tendency of the ions, the total dielectric strengths of the fast relaxation modes of water in the alkali chloride solutions can be arranged in the order Cs+ > Rb+ > K+ > Na+ > Li+. The observed trend implies, in the first place, that cations with a larger size give rise to a higher number of weakly coordinated water molecules in the solution. This behavior conforms well with the classical Hofmeister series, which advocates the structure-breaking tendency of these cations to be in the order Cs+ > Rb+ > K+ > Na+ > Li+.9 Discussion The dynamic response of liquid water extends from gigahertz to terahertz frequencies, encompassing several dynamical features of inter- and intramolecular origin. In this work, we have analyzed the dielectric response of liquid water at 100 MHz-1.12 THz using a three-Debye model and have identified three distinct dynamics with relaxation times of τ_D = 8.27 ± 0.20 ps, τ_2 = 1.1 ± 0.3 ps, and τ_3 = 0.16 ± 0.05 ps. The slowest relaxation process, τ_D, dominates the dielectric response of liquid water. The process originates from the collective orientational dynamics of bulk water, representing the cooperative rearrangement of hydrogen bonds among water molecules within their tetrahedral hydrogen-bonding network.14,50,51 The next relaxation process, at ~1 ps, arises from weakly coordinated water molecules and is characterized as the reorientation of single water molecules, τ_2.14,19 A previous study19 showed less than a 10% change in the τ_2 relaxation time for a NaI salt solution, which is also in line with earlier work from Tielrooij et al.12,77 To explore the origin of the dielectric response at terahertz frequencies, a number of experiments have been performed on ionic solutions. Using terahertz time-domain spectroscopy combined with MD simulations, Balos et al.21 studied a broad range of aqueous salt solutions, showed a relaxation dynamics occurring at ≈0.5 THz, and attributed it to the ion's fluctuations in its solvation cage. In a separate report, Schmidt et al.,78 also using terahertz spectroscopy to investigate aqueous solutions of monovalent cations, showed a rattling motion of ions in the solvation cage occurring at a slightly higher frequency than the rotational-librational mode of water at ≈0.15 ps.
Using optical Kerr-effect spectroscopy, Heisler et al.79 reported a single-exponential relaxation with a relaxation time of 56 ± 8 fs and a damped-harmonic oscillation at high frequencies of 168, 150, and 132 cm⁻¹ for NaCl, NaBr, and NaI solutions, respectively. The high-frequency damped-harmonic oscillation in the ionic solutions appears as a broad and well-defined peak that is absent in the pure water spectrum and has been attributed to the ion-water interaction. The 56 ± 8 fs single-exponential relaxation has been observed in pure water, with a more pronounced peak present in the salt solutions. Shalit et al.20 investigated the impact of monatomic cations on the relaxation dynamics of the hydrogen-bond network in aqueous salt solutions using two-dimensional Raman-terahertz spectroscopy. The results show an average relaxation dynamics intrinsic to the hydrogen-bond network with relaxation times of ≈65, 55, and 80 fs for pure water, CsCl, and NaCl solutions, respectively. These results indicate that this sub-picosecond relaxation mode is intrinsic to the water structure and is in reasonably good agreement with the τ_3 relaxation observed in the present study. The dynamics of water molecules around cations is influenced as a result of the cation-water interaction. Water molecules in the hydration shells of small cations reorient relatively slowly compared with those of the large ones. The dynamics of water molecules directly interacting with small cations slows down by a factor of 5 to 6 (Table 1) compared with that of bulk water. However, the electrostatic effects of large cations such as Rb+ and Cs+ are constrained to the first solvation layer only, signifying local effects, with a minor slowdown in the orientational dynamics of the surrounding water molecules. As can be seen in Table 1, a correlation can be observed between the charge density and the orientational relaxation times of water molecules in direct contact with the cations. Beyond the hydration shells, the two faster relaxational dynamics observed in all aqueous salt solutions are identical to those observed in pure water. These relaxation modes are attributed to the dynamics of weakly coordinated water molecules that are largely uncorrelated from the tetrahedral network of bulk water. The estimation of the total number of hydration water molecules and the probability distribution of water molecules around the ions, RDF_X-O(r), concurrently suggest that the electrostatic potential affects the kinetics of water molecules. Small cations with higher charge densities have a greater tendency to accumulate water molecules around them. The first two layers of water molecules are strongly bound to Li+. However, large cations such as K+, Rb+, and Cs+ are weakly hydrated, and the hydration number is very low, being close to zero in the case of the Cs+ ion (Fig. 5). The hydration number decreases from ~8.5 for Li+ to ~0 for Cs+. Beyond the hydration layers of small ions (e.g., Li+, Na+) or the surface of large cations such as Rb+ and Cs+, the dynamics of water molecules is characterized by the collective reorientation, τ_D, and the fast relaxation processes of weakly coordinated water molecules, involving the single-water-molecule reorientation, τ_2, and the structural rearrangement due to breaking and reforming of hydrogen bonds, τ_3. The dielectric strengths of the fast relaxation processes increase with salt concentration and follow the ionic order Cs+ > Rb+ > K+ > Na+ > Li+. Ions thus induce an increase of the fast water relaxation processes.
The population density of single-water molecules and the structural rearrangement activity increase with salt concentration and follow the aforementioned ionic order. The ordering reflects the effects of the ions on the water structure, which is related to the dynamical fluctuations of the hydrogen-bond network. Fig. 5e provides a schematic diagram of the interfacial layers of water around Li+ and Cs+ ions, along with the slow, fast, and bulk water relaxation processes. The order of the ion effects on the structure of water also follows the mobility order of the monovalent cations, Cs+ > Rb+ > K+ > Na+ > Li+ (Table 1).59 In this order, the mobility is high for heavy cations and low for light ions (Table 1). A possible explanation is that small ions such as Li+ and Na+ carry a large number of hydration water molecules (i.e., they are strongly hydrated) and thus move slowly compared with the heavy cations (e.g., Rb+, Cs+). Our observation of the total dielectric amplitudes of the fast relaxation processes beyond the hydration shells shows that large cations such as Rb+ and Cs+ induce an increase in the total dielectric strengths of the fast relaxation processes, i.e., a larger volume of structure-broken water in the intermediate layer. Thus, the diffusion ability of water molecules around large cations is enhanced compared with that around small cations, explaining the order of mobility of monovalent cations in aqueous solutions. The relation of the effects of ions on the water structure, as structure-making or -breaking properties, to the viscosity has been established earlier.7,80 Solvation of salts into water either enhances or diminishes the viscosity,80 which is often explained by the Jones-Dole empirical relation, η/η₀ = 1 + A·c_s^{1/2} + B·c_s + D·c_s², with η and η₀ being the viscosities of the aqueous salt solution and bulk water, respectively, c_s the molar concentration of the salt, A and D coefficients that depend on interatomic forces, and B a coefficient that characterizes the ion-water interaction. The sign of the coefficient B can be negative or positive depending on the nature of the interaction (B > 0 for structure makers and B < 0 for structure breakers, Table 1). Such an empirical classification has previously been used by Marcus et al.7,80 and others17,20,81 to categorize the effect of ions on water as "kosmotropic" or "chaotropic". The order of cations in our observations for the hydration shells and the dielectric response of the fast relaxation processes is analogous to the characterization of ions by the Jones-Dole B coefficients. Specifically, the hydration number is reduced from ~8.5 in the case of Li+ to 0 in the case of Cs+, while, in contrast, the total dielectric strength of the fast relaxation processes increases from Li+ to Cs+. The bound water molecules interacting with the smaller cations with higher surface charge density, such as Li+ and Na+, are orientationally ordered under the influence of the local electrostatic field of the cations, as compared to those weakly interacting with the large ones of lower surface charge density. This gives small cations the least overall tendency to break the intermolecular correlation among water molecules, as reflected in the dielectric strengths of the fast water dynamics. One can organize the cations based on their tendency to influence the dynamic fluctuations of the hydrogen-bond network as Cs+ > Rb+ > K+ > Na+ > Li+, which conforms well to the Hofmeister series for monovalent cations.
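To make the sign convention of the B coefficient concrete, a small sketch; the B values below are hypothetical placeholders with the signs discussed above, not values from the paper or Table 1:

```python
def jones_dole_ratio(c_s, B, A=0.0, D=0.0):
    """Relative viscosity eta/eta0 = 1 + A*sqrt(c_s) + B*c_s + D*c_s**2."""
    return 1.0 + A * c_s ** 0.5 + B * c_s + D * c_s ** 2

# hypothetical B coefficients (per mol/L), neglecting the A and D terms:
for label, B in [("structure maker (Li+-like)", +0.15),
                 ("structure breaker (Cs+-like)", -0.05)]:
    print(label, jones_dole_ratio(1.0, B))   # ratio > 1 or < 1 at c_s = 1 M
```

A ratio above 1 (viscosity enhanced) marks a kosmotrope, while a ratio below 1 (viscosity diminished) marks a chaotrope, mirroring the dielectric ordering above.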
Conclusions

In conclusion, the presence of ions in aqueous solutions significantly affects various macroscopic properties of the solutions and thereby alters molecular activities in solution. Dielectric spectroscopy has revealed the solvation dynamics in aqueous salt solutions, which is composed of several features in the megahertz-to-terahertz frequency region. The monovalent cations Li+, Na+, K+, Rb+, and Cs+ were chosen for the study, with chloride as the common anion. Our study reveals that the electrostatic forces of ions influence the dynamics of water through multiple processes, forming interfacial layers composed of the hydration shells and an intermediate layer. In the gigahertz frequency region, the dielectric response shows that the effects of ions strongly depend on their charge densities: small cations with high charge density, such as Li+ and Na+, strongly influence the dynamics of water molecules and form hydration shells around them. Large cations with low charge densities have weak electrostatic effects on the water dynamics; accordingly, their hydration number is low, or even zero in the case of Cs+. In the terahertz region, the dielectric response of salt solutions is governed by two distinct fast relaxation modes arising from weakly coordinated water molecules in the intermediate layer at the ion-water interface, namely the reorientation of single water molecules and the structural rearrangement due to breaking and reforming of hydrogen bonds around the solvated ions. Terahertz spectroscopy reveals that this behavior originates from an increase in the population density of weakly coordinated water, as a result of reduced orientational correlation among water molecules under the influence of cations. The tendency of these monovalent cations to break the intermolecular cooperativity among water molecules follows the order Cs+ > Rb+ > K+ > Na+ > Li+.

Supplementary material

See the supplementary material for details of the terahertz spectroscopy of salt solutions, the molecular dynamics simulations, the analysis of the dielectric strength of water in salt solutions, and the hydration numbers.

Author declarations

Conflict of Interest: The authors have no conflicts to disclose.

Interfacial Layers between Ion and Water Detected by Terahertz Spectroscopy

The Lennard-Jones (LJ) pair potential is

U_LJ(r) = 4ε[(σ/r)^12 − (σ/r)^6],   (S1a)

where r is the distance between atoms, ε is the depth of the LJ potential well, and σ is the distance at which the potential between two interacting particles becomes zero. The LJ parameters of the ions Li+, Na+, K+, Cs+, and Cl− are taken from Ref. 11, and those for Rb+ from Ref. 12. The parameters used here are listed in Table S1. The geometric mixing rules shown in Eqs. (S1b, c) are used for interactions between different types of atoms. Coulombic interactions are included, and the particle-particle particle-mesh method is used for long-range electrostatics; the system is equilibrated with the Nosé-Hoover algorithm. After equilibration, a further 5 ns simulation at 298 K in the NVT ensemble using the Nosé-Hoover algorithm is taken as the production run. The orientational autocorrelation function is calculated as

C(t) = (1/N) ⟨ Σ_i μ_i(0) · μ_i(t) ⟩,

where μ_i(t) is the unit vector in the direction of the instantaneous electric dipole associated with a water molecule at time t, and the summation runs over the total of N water molecules in the aqueous solution. The ensemble average is computed by treating different states of the system as the initial state at t = 0.
Our analyses indicate that the orientational autocorrelation functions, C(t), can be fitted by a superposition of three exponential functions, C(t) = Σ_i A_i exp(−t/τ_i), where τ_i is the orientational relaxation time of the i-th relaxation mode and A_i is the weighting coefficient of that mode. The A_i and τ_i parameters obtained from the fits are shown in Table 1 (we use τ_A, τ_B, and τ_C for τ_1, τ_2, and τ_3). Our MD simulations for pure water give an orientational relaxation time of water molecules of 5.2 ± 0.6 ps,3,4,13,14 which is in good agreement with other reports of 4.5 ps.15,16 We also employ a phenomenological modification of the Hubbard-Onsager continuum theory proposed by Sega et al.,21 which considers the screening due to the ionic cloud at the mean-field level for the kinetic depolarization contribution, Δε_kd(κ, r_s), where κ is the inverse Debye screening length and r_s is the effective ionic radius. A comparison is shown in the supplementary material, Fig. S8. As shown there, the correction affects the hydration numbers more for bigger cations than for smaller ones.
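As a minimal illustration of this fitting procedure, the Python sketch below generates a synthetic C(t) and recovers three relaxation times with scipy's curve_fit; the time constants and noise level are invented for the demo and are not the study's MD data.

```python
# Sketch of the triexponential fit C(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2) + A3*exp(-t/tau3).
# The synthetic data below stand in for an MD-derived orientational autocorrelation function.
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, tau1, a2, tau2, a3, tau3):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + a3 * np.exp(-t / tau3)

t = np.linspace(0.01, 30.0, 600)                          # time axis in ps
true = (0.60, 5.2, 0.25, 0.50, 0.15, 0.08)                # (A_i, tau_i) pairs, illustrative
y = tri_exp(t, *true) + np.random.default_rng(0).normal(0.0, 1e-3, t.size)

p0 = (0.5, 4.0, 0.3, 0.4, 0.2, 0.1)                       # rough initial guess
popt, _ = curve_fit(tri_exp, t, y, p0=p0, maxfev=20000)
print("recovered tau_1, tau_2, tau_3 (ps):", popt[1], popt[3], popt[5])
```

In practice such fits are sensitive to the initial guess when the time constants are close together, so constraining the amplitudes to sum to C(0) is a common stabilization.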
Children's Production of Interlanguage in Speaking English as a Foreign Language

Article History: Received 2 July 2018; Accepted 24 September 2018; Published 23 December 2018

INTRODUCTION

English is considered a universal language because it is the most widely spoken language worldwide. As stated by Safari and Fitriati (2016, p. 87), English has become a medium in every domain of communication, in both local and global contexts. In Indonesia, English is considered a foreign language, as explained in the Act of the Republic of Indonesia No. 20 (2003), Article 37, Verse 1, concerning the National Education System. Learning English as a foreign language in Indonesia usually began at junior high school, from the independence of Indonesia up to the beginning of the 2000s. In that era, the main objective of learning English was to develop students' reading ability, which would be useful for reading English references at university or other tertiary education (Agustien, 1997, pp. 1-2). In the new era, that earlier purpose of learning English no longer meets the needs of some Indonesian communities or of educational development; they consider that starting to learn English as a foreign language in junior high school is too late. Moreover, Kalisa (2014, p. 100) added that learning a foreign language in the early years is seen as a milestone that encourages children's lifelong learning. Therefore, some Indonesian communities pay more attention to English in their daily life, for example by using English as a foreign language in the family or by sending their children to a school which uses English as the medium of instruction both inside and outside the classroom.

For this reason, immersion education, in which English is used as the medium of instruction, was established. When children are immersed in this English-speaking environment, there is a need to use English as a means of communication. Through this large amount of communication, the children tend to use English as their L2 frequently. Furthermore, the children also have more chances to interact naturally with many kinds of speaking partners of different ages and in different social contexts. Therefore, such a school may fulfill the needs of those Indonesian communities that recognize the importance of speaking English in the global era. Bina Bangsa School Semarang is one such immersion school, where English is used as the students' second language. It is an international school that requires the students to speak English inside and outside the classroom, during and outside school hours. In the process of acquiring English as a second language at this school, the students produce language that is neither identical to that produced by native speakers of the target language (TL) nor an exact "translation" from Indonesian as the learners' native language (NL). This language system, containing elements of both NL and TL, is called interlanguage, as stated by Selinker in Mitchell et al. (2012).
Several studies have supported the existence of interlanguage in the SLA process. Ningrum (2009), Deveci (2010), Harakchiyska (2011), Aziez and Yelfiza (2016), and several researchers who conducted their studies in 2013, such as Sutopo, Khorsidi, Mahardhika, and Resturini, investigated interlanguage using oral production as their data. Such data can take the form of daily conversation, speeches, interview results, reading aloud, and casual conversation. Other researchers, such as Chen (2016), Fauziati and Darussalam (2015), Wedananta (2017), M. Lestari (2016), and Maftuhin and Fauziati (2016), used written data for their interlanguage studies, in the form of students' free compositions, student tasks, and English textbooks. Meanwhile, Yusuf (2012) and Sutopo (2014) used a mixture of oral and written production as the data for their interlanguage studies. Also related to interlanguage are studies on errors, such as those by Ratnah (2013), Pandarangga (2014), Ismail and Harono (2016), Sari (2016), Tandikombong, Atmowardoyo, and Weda (2016), Asikin (2017), Nurani (2017), and Sukendra (2018). They mostly used written texts as the data for analyzing the errors made by students, and error analysis was used as the framework for examining the data.

The studies above used error analysis to analyze their data, especially those based on written data. In the present study, by contrast, the researcher did not use error analysis to analyze the students' interlanguage, because interlanguage is described as a part of the process of second language acquisition rather than as error; it develops along with the students' L2 learning. Considering those reasons, this study tried to describe the interlanguage of four- to five-year-old children in an English-speaking environment by describing its features, explaining the strategies used by the children to cope with the influences of the native and target languages, and clarifying the causes of interlanguage. Hence, this study provides empirical evidence that IL occurs in the process of SLA as a result of learners' efforts to speak English as an L2. It also gives more information and understanding to immersion-education teachers that their students produce interlanguage. Moreover, it can help them make the teaching and learning process more effective and efficient by providing appropriate strategies, media, and activities, so that fossilization will not happen. Finally, this study also offers information about interlanguage research focusing on children's interlanguage in speaking English as a foreign language for other researchers.
METHOD

The present study was a qualitative case study of SLA in an English-speaking environment. The subjects were two non-native teachers and fifteen Kindergarten I Integrity students of Bina Bangsa School Semarang. All of them are native Indonesians who speak Indonesian as their L1. The data were collected by recording their daily conversations at school for about three months and by interviewing the class teachers. The daily conversations were recorded inside and outside the classroom, during teaching and learning time, playing time, and break time. The interview with the class teachers was conducted after the conversation data had been gathered; it was used to obtain additional information and opinions from another point of view on the interlanguage phenomenon occurring among the students. The recorded data were transcribed and then classified within SLA and interlanguage frameworks using observation sheets. In addition, data were gathered from the class teachers through a free guided interview with a question list, to obtain further perceptions from the teachers regarding the interlanguage phenomenon among the students. After collecting the data, the researcher transcribed the recordings turn by turn (Paltridge, 2000). Then the interlanguage productions were identified and classified according to their features, strategies, and causes. Adjemian's (1976) framework was used to classify the interlanguage features. Selinker's (1972) and Faerch and Kasper's (1983) frameworks of L2 learning strategies and L2 communication strategies were used to classify the strategies employed by the students in coping with native- and target-language influences. Furthermore, Brown's (1973) framework of grammatical morphemes in SLA, Selinker's (1972) theory of the five central processes of interlanguage, and Ellis's (1985) theory of negation, interrogation, and reflexive pronouns in SLA were used to explain the causes of interlanguage. Next, the researcher analyzed the data after classifying them according to these frameworks to obtain the findings. Finally, the researcher explained the findings and interpreted the data analysis to answer the research questions.

RESULTS AND DISCUSSIONS

The results and discussions explain Kindergarten I Integrity students' interlanguage production through its features, strategies, and causes.

The Description of Kindergarten I Students' Interlanguage Production

The description of the interlanguage production occurring among Kindergarten I students of Bina Bangsa School Semarang in speaking English as a foreign language is given through the clarification of its features, based on Adjemian's (1976) framework of interlanguage features. An example of the categorization can be seen in Table 1 of the Appendix.
It was found that systematicity in interlanguage arose from its spontaneity, one characteristic of spoken language. When the students wanted to say something, they simply said it in order to communicate with others, without worrying about mistakes. The interlanguage-features tables show that the students used the base verb form (verb one) systematically. This supports the earlier study by Resturini (2013), which found that children do not use past verb forms in their interlanguage when they want to talk about the past. They used the base verb form under the influence of interference from their native language, Bahasa Indonesia, on their target language, English, as the children's second language. Besides using the base verb form systematically, the students also systematically used the word "this" or "this one" together with gestures when they wanted to express an English word they did not know. Moreover, most students systematically produced "no" or "not" as the negative particle in their sentences, and they mostly used declarative word order in their interrogative sentences.

The interlanguage-features tables also show that permeability occurred among the students. It was caused by the infiltration of Indonesian as the students' L1 and the infiltration of English as their L2. The students also produced interlanguage dynamically; in particular, when new L2 knowledge is added, the learner's language competence develops.

Finally, of the four interlanguage features proposed by Adjemian (1976), only three were found in the interlanguage production of the Kindergarten I Integrity students: systematicity, permeability, and dynamicity. Fossilization was not found, since it usually occurs in adolescence. The students were four- to five-year-old children who still had a long period of English learning ahead of them in their process of acquiring it as a second language. For that reason, the students' language competence developed along with their efforts in learning the target language and the new knowledge they gained during the SLA process.

The Explanation of the Strategies Used by the Students in Anticipating the Influence of the Native and Target Languages

The strategies used by the students in anticipating the influences of the native and target languages were analyzed after the interlanguage features. An example of the analysis of these strategies, based on Selinker's (1972) and Faerch and Kasper's (1983) frameworks, is presented in Table 2 of the Appendix.
From the tables, it appears that the students used strategies of L2 learning and L2 communication to cope with the influences of the native and target languages. Oxford (2002, p. 36) referred to language learning strategies as specific behaviors or thought processes that students use to enhance their own L2 learning, and classified the strategies into several categories. The L2 learning strategies most used by the students, however, were cognitive, compensation, and social strategies. The cognitive strategies used by the students operated first through recognizing English words and then practicing them in natural settings, even though the students were not yet able to apply formulas and patterns according to the correct L2 rules. Another L2 learning strategy used by the students in coping with the influence of the native and target languages was the compensation strategy, in the forms of switching to the mother tongue, getting help, using mime or gesture, coining words, and using circumlocution or synonyms. Besides cognitive and compensation strategies, the students also used social strategies, which include asking questions to get verification, asking for clarification of a confusing point, and asking for help in doing a language task.

Selinker (1972) identified the use of communication strategies as one of the processes affecting SLA, and Faerch and Kasper (1983) classified the communication strategies usually used in L2 acquisition. Based on this classification, the Kindergarten I Integrity students used L2 communication strategies by switching to the mother tongue, asking for help from teachers and peers, using gestures, coining words, paraphrasing with other words or synonyms, and using time-gaining strategies. As the teachers explained in the interviews, when the students could not express their intended meaning in English, they automatically switched to Indonesian. They added that the students also used gestures on some occasions, such as pointing to the objects they meant. This supports the account of Morett et al. (2010) that when children speak interlanguage, they usually gesture to express their meaning. Furthermore, the students also used other words when they could not express their intended words in English. The tables of strategies show that L2 learning strategies were those most used by the students, since they necessarily relied on strategies in learning a new language. Code switching was the second most used strategy, because the students had better knowledge of their L1 and therefore easily switched to Indonesian when they did not know the term in English.

The Clarification of the Causes of Interlanguage among the Students

After analyzing the students' interlanguage through its features and strategies, the researcher clarified the causes of interlanguage by adapting Brown's (1973) framework of the acquisition of grammatical morphemes in SLA, Ellis's (1985) framework of the development of negation, interrogation, and reflexive pronouns in SLA, and Selinker's (1972) framework of language transfer and overgeneralization as two of the five psycholinguistic processes of SLA. An example of a table describing the causes of interlanguage can be seen in Table 3 of the Appendix.
The findings in the tables of interlanguage causes show that interlanguage was mostly caused by language transfer, followed by overgeneralization and by the development of the grammatical order of negation, interrogation, and reflexive pronouns. The language transfer that occurred among the students was the result of interlingual and intralingual interference. This supports Allen and Corder's (1974) view, cited in Sari (2016), that language transfer happens as a result of interlingual and intralingual interference. The interlingual interference took the form of mother-tongue interference, in which the students applied the rules of their L1, Indonesian, when speaking English. As the teachers confirmed in the interviews, when the students lack L2 knowledge or do not know an English word, they automatically switch to Indonesian or Javanese; they added that the students also applied Indonesian rules when speaking English. Meanwhile, the intralingual interference among the students took the form of generalization of the rules of English as their L2, caused by the students' lack of L2 knowledge. The generalization of English rules as L2 included overgeneralization, incomplete rule application, and simplification, as stated by Fauziati (2017).

Besides language transfer and the generalization of L2 rules, the development of grammatical morphemes plays an important role in the occurrence of interlanguage. As stated by Brown (1973), cited in Owens (1992), children acquire certain grammatical structures or morphemes before others in first language acquisition, and there is a similar natural order in SLA. This natural order of grammatical morphemes also occurred among the Kindergarten I students and influenced their production of English as their L2. The following is an example of the natural order of grammatical morphemes in pronouns influencing the students' interlanguage production.

Beatrice: Davin, I want to borrow you. (pointing to a red crayon)

According to the natural order of grammatical morphemes proposed by Brown (1973), children initially acquire "you" and only later "yours". Therefore, Beatrice used "you" rather than "yours", and this was not a mistake or an error; it is part of the students' process of acquiring English as an L2. Other examples of the natural order of grammatical morphemes influencing the students' interlanguage production occurred in the development of plurals, the present progressive, possessives, prepositions, the irregular past tense, and articles.

Furthermore, a student who is still learning English might say, for example, "Why you no come?" or "I no lesson". Such imperfect sentences indicate the development of the students' negation, interrogation, and reflexive pronouns, as proposed by Ellis (1985), who stated that children go through a number of key steps before mastering a structure. These kinds of development also occurred among the Kindergarten I Integrity students. Some of them used "no" as external negation, as in the sentence "No eating". The negation then developed into internal negation using "no" and "not" as the negative particle, as in the sentence "I no can swim". Some students also showed negation development with the attachment of a modal verb, as in "Ms, I can't open this". Moreover, their negation development even reached the target-language rule, although they applied it inappropriately, as in "He don't know, Ms."
Besides the development of negation, the students also made progress in their interrogation stages. Initially, some of them produced yes/no questions seeking confirmation or non-confirmation, either by adding rising intonation to the end of the sentence, as in "You ever go to Singapore?", or by placing an auxiliary verb in front of the subject, as in "Do you like it?". Questions with a wh-word and omission of the auxiliary verb also occurred among the students, as in "Why you push the buton?". The students also showed development of wh-questions with inversion of "to be" and the auxiliary verb, as in "Are you a boy?" or "Where is my friends?". The development of interrogation in embedded questions with subject-verb inversion did not appear in the students' process of SLA; their development appears to reach the third stage, namely the use of wh-questions with inversion of "to be" and the auxiliary verb.

Lastly, Dulay, Burt, and Krashen (1977) explained that the development of reflexive pronouns occurs in the process of acquiring an L2. This development also happened among the Kindergarten I students, as the following example shows.

Beatrice: Davin, I want to borrow you. (pointing to a red crayon)

According to the sequence of acquiring reflexive pronouns proposed by Brown (1973), children initially acquire "you" and only later "yours". Therefore, Beatrice used "you" rather than "yours", and this was not a mistake or an error.

CONCLUSION

It can be concluded that the students of Kindergarten I Integrity produced interlanguage systematically, permeably, and dynamically through their daily conversations with teachers and peers. Fossilization did not occur because the students were still in the process of acquiring the L2, during which their language competence developed along with their efforts in learning the target language and the new knowledge they gained.

The students used strategies of L2 learning and L2 communication, as proposed by Selinker (1972), in coping with the influences of the native and target languages. The L2 learning strategies appeared as cognitive, compensation, and social strategies, while the L2 communication strategies appeared as switching to the mother tongue, asking for help from teachers and peers, using gestures, coining words, paraphrasing with other words or synonyms, and using time-gaining strategies. The L2 learning strategies were those most used by the students, since they necessarily relied on strategies in learning a new language. Code switching was the second most used strategy, because the students had better knowledge of their L1 and therefore easily switched to Indonesian when they did not know the term in English.

The interlanguage that occurred among the students was caused by several factors: language transfer, overgeneralization, the development of grammatical order, and the development of negation, interrogation, and reflexive pronouns. Language transfer was the most frequent cause; it occurred in the form of interlingual and intralingual interference, owing to the students' good mastery of their L1, Indonesian, and their lack of knowledge of English as the L2. Overgeneralization was the second most frequent cause of interlanguage among the students, as it is part of the language transfer process.
Defensive alliances in graphs: a survey

A set S of vertices of a graph G is a defensive k-alliance in G if every vertex of S has at least k more neighbors inside of S than outside. This is primarily an expository article surveying the principal known results on defensive alliances in graphs. Its seven sections are: Introduction; Computational complexity and realizability; Defensive k-alliance number; Boundary defensive k-alliances; Defensive alliances in Cartesian product graphs; Partitioning a graph into defensive k-alliances; and Defensive k-alliance free sets.

Introduction

Alliances occur in a natural way in real life. Generally speaking, an alliance can be understood as a collection of elements sharing similar objectives or having similar properties. In this sense, there exist alliances like the following: a group of people united by a common friendship, or perhaps by a common goal; a group of plants belonging to the same botanical family; a group of companies sharing the same economic interest; a group of Twitter users following or being followed among themselves; a group of Facebook users sharing a common activity.

Alliances in graphs were first described by Kristiansen et al. in [26], where alliances were classified into defensive, offensive or powerful. A defensive alliance in a graph was defined as a set of vertices such that every vertex of the alliance has at most one more neighbor outside of the alliance than inside it. After this seminal paper, the topic has been studied intensively. Remarkable examples are the articles [32, 35], where the authors generalized the concept of a defensive alliance to that of a defensive k-alliance: a set S of vertices of a graph G with the property that every vertex in S has at least k more neighbors in S than it has outside of S.

Throughout this survey G = (V, E) represents an undirected finite graph without loops or multiple edges, with vertex set V and edge set E. The order of G is |V| = n(G) and the size is |E| = m(G) (if there is no ambiguity we will write simply n and m). We denote two adjacent vertices u, v ∈ V by u ∼ v, and in this case we say that uv is an edge of G, i.e., uv ∈ E. For a nonempty set X ⊆ V and a vertex v ∈ V, N_X(v) denotes the set of neighbors v has in X, N_X(v) := {u ∈ X : u ∼ v}, and the degree of v in X is denoted by δ_X(v) = |N_X(v)|. In the case X = V we write simply N(v), which is also called the open neighborhood of the vertex v ∈ V, and δ(v) for the degree of v in G. The closed neighborhood of a vertex v ∈ V is N[v] = N(v) ∪ {v}. The minimum and maximum degree of G are denoted by δ and ∆, respectively. Given k ∈ {−∆, ..., ∆}, a nonempty set S ⊆ V is a defensive k-alliance in G = (V, E) if

δ_S(v) ≥ δ_S̄(v) + k, for every v ∈ S,   (1)

where S̄ = V \ S. Since δ_S̄(v) = δ(v) − δ_S(v), equation (1) is equivalent to

δ_S(v) ≥ ⌈(δ(v) + k)/2⌉, for every v ∈ S.   (2)

The minimum cardinality of a defensive k-alliance in G is the defensive k-alliance number, denoted by a_k(G). The case k = −1 corresponds to the standard defensive alliances defined in [26]. A set S ⊆ V is a dominating set in G if for every vertex v ∉ S, δ_S(v) > 0 (every vertex outside S is adjacent to at least one vertex in S). The domination number of G, denoted by γ(G), is the minimum cardinality of a dominating set in G [20]. A defensive k-alliance S is called global if it forms a dominating set. The minimum cardinality of a global defensive k-alliance in G is the global defensive k-alliance number, denoted by γ^d_k(G).
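As a concrete companion to definitions (1) and (2), the following Python sketch (using networkx, with an illustrative graph and vertex sets) tests whether a given set S satisfies the defensive k-alliance condition.

```python
# Check condition (2): S is a defensive k-alliance iff 2*delta_S(v) >= delta(v) + k
# for every v in S. The graph and sets below are illustrative examples.
import networkx as nx

def is_defensive_k_alliance(G, S, k):
    S = set(S)
    for v in S:
        inside = sum(1 for u in G.neighbors(v) if u in S)  # delta_S(v)
        if 2 * inside < G.degree(v) + k:
            return False
    return True

G = nx.complete_graph(5)                          # K_5: every vertex has degree 4
print(is_defensive_k_alliance(G, {0, 1, 2}, -1))  # True: 2 neighbors in, 2 out, 2 >= 2 - 1
print(is_defensive_k_alliance(G, {0, 1}, 0))      # False: 1 neighbor in, 3 out
```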
As a particular case of defensive alliances, the limit case of equation (1) was defined and studied in [41]. There, a set S ⊂ V is a boundary defensive k-alliance in G, k ∈ {−∆, ..., ∆}, if

δ_S(v) = δ_S̄(v) + k, for every v ∈ S.   (3)

A boundary defensive k-alliance in G is called global if it forms a dominating set in G. Notice that equation (3) is equivalent to

δ_S(v) = (δ(v) + k)/2, for every v ∈ S.   (4)

Note that there are graphs which do not contain any boundary defensive k-alliance for some values of k. For instance, the hypercube graph Q_3 has no boundary defensive 0-alliances.

Defensive alliances have been studied in different ways. The first results about defensive alliances were presented in [17, 26], and after that several results have appeared in the literature, like those in [1, 3, 4, 6, 8, 12, 13, 14, 18, 19, 21, 22, 27, 28, 29, 30, 36, 37, 38]. The complexity of computing the minimum cardinality of a defensive k-alliance in a graph was studied in [5, 15, 23, 25, 37], where it was proved that this is an NP-complete problem. A spectral study of alliances in graphs was presented in [27, 30], where the authors obtained bounds for the defensive alliance number in terms of the algebraic connectivity, the Laplacian spectral radius and the spectral radius of the graph. Global defensive alliances in trees and in planar graphs were studied in [3, 18] and [28], respectively. Defensive alliances in regular graphs and circulant graphs were studied in [1]. Moreover, alliances in complement graphs, line graphs and weighted graphs were studied in [37], [30, 38] and [24], respectively. Some relations between the independence number and the defensive alliance number of a graph were obtained in [8, 14]. Also, partitions of a graph into defensive k-alliances were investigated in [12, 13, 21, 40]. Next we survey the principal known results about defensive alliances.

Computational complexity and realizability

The complexity of computing the minimum cardinality of a defensive k-alliance was studied in [6, 23, 25, 37]. Consider the following decision problem (for any fixed k).

DEFENSIVE k-ALLIANCE PROBLEM
INSTANCE: A graph G = (V, E) and a positive integer ℓ < |V|.
PROBLEM: Does G have a defensive k-alliance of size at most ℓ?

GLOBAL DEFENSIVE k-ALLIANCE PROBLEM
INSTANCE: A graph G = (V, E) and a positive integer ℓ < |V|.
PROBLEM: Does G have a global defensive k-alliance of size at most ℓ?

To our knowledge, a general solution for this problem is still unknown and, as we can see below, for k = −1 the problem is NP-complete. As shown in [23], the GLOBAL DEFENSIVE (−1)-ALLIANCE PROBLEM is NP-complete, even when restricted to chordal graphs or bipartite graphs.

Now we consider some realizability results. Since every global (−1)-alliance is also a dominating set, we know that γ^d_{−1}(G) ≥ γ(G) for any graph G. Every global (−1)-alliance is also a defensive alliance, so γ^d_{−1}(G) ≥ a_{−1}(G). In fact, as was shown in [4], any three positive integers satisfying these inequalities are achievable as the (−1)-alliance number, the domination number, and the global (−1)-alliance number of some graph G. Based simply on the definitions, the domination number, global (−1)-alliance number, and global 0-alliance number must satisfy γ(G) ≤ γ^d_{−1}(G) ≤ γ^d_0(G) for any graph G. The following question was studied in [4]: given any three positive integers a ≤ b ≤ c, is there a graph G so that γ(G) = a, γ^d_{−1}(G) = b and γ^d_0(G) = c?
The next result concerns not only the minimum cardinality of a defensive (−1)-alliance, a defensive 0-alliance or a global defensive (−1)-alliance of a graph, but also the subgraphs induced by these alliances.

Theorem 5. [4] Given 1 ≤ a ≤ b and any two connected graphs H_1 and H_2 of orders a and b, respectively, there exists a connected graph G with the following properties.
• H_1 is isomorphic to the subgraph induced by the only defensive alliance of G that has minimum cardinality a_{−1}(G).
• H_2 is isomorphic to the subgraph induced by the only strong defensive alliance of G that has minimum cardinality a_0(G).

As the following result states, any connected graph is the subgraph induced by the unique minimum global (−1)-alliance (0-alliance) of some graph.

Theorem 7. [4] Given a connected graph H, there exists a connected graph G for which H is the subgraph induced by the unique global defensive (−1)-alliance (respectively, 0-alliance) of G with minimum cardinality γ^d_{−1}(G) (respectively, γ^d_0(G)).

Defensive k-alliance number

According to the definitions, the domination number, global defensive k-alliance number and defensive k-alliance number must satisfy max{γ(G), a_k(G)} ≤ γ^d_k(G) for any graph G. Now we present some results related to the monotonicity of a_k(G) and γ^d_k(G). The following two results are obtained directly from Theorem 8.

Corollary 9. [30] Let G be a graph of minimum degree δ and maximum degree ∆ and let t ∈ Z.

Theorem 12. [17, 26] For any graph G of order n and minimum degree δ, and also

After that, some generalizations of the above results to the case of defensive k-alliances were presented in [30].

Theorem 13. [30] Let G be a graph of order n, maximum degree ∆ and minimum degree δ.

The global defensive k-alliance number has also been bounded using basic parameters of the graph, such as the minimum and maximum degrees, the size, etc. For instance, it was shown in [19] that for any graph G of order n and minimum degree δ,

The next result generalizes the previous bounds to the case of global defensive k-alliances.

Theorem 14. [29] Let G be a graph of order n, maximum degree ∆ and minimum degree δ. For any k ∈ {−∆, ..., ∆},

The upper bound is attained, for instance, for the complete graph G = K_n for every k ∈ {1 − n, ..., n − 1}. The lower bound is attained, for instance, for the 3-cube graph G = Q_3 in the following cases: . It was shown in [19] that for any bipartite graph G of order n and maximum degree ∆,

The generalization of these bounds to the case of global defensive k-alliances is shown in the following theorem.

Theorem 15. [29] For any graph G of order n and maximum degree ∆, and for any k ∈ {−∆, ..., ∆},

The above bound is tight; for instance, for the Petersen graph the bound is attained.

Defensive k-alliance number of some particular graph classes

We begin this section with a summary of the values of the (global) defensive alliance number for some basic families of graphs. These results were obtained in [19, 26]. Defensive alliances in regular graphs and circulant graphs were studied in [1]. In order to present some results from [1], it is necessary to introduce some notation. Given a graph G = (V, E) and a subset S ⊂ V, the subgraph induced by S will be denoted by ⟨S⟩. The (k, δ)-induced alliance is the set of graphs H of order t, minimum degree δ_H ≥ δ/2 and maximum degree ∆_H ≤ δ, with no proper subgraph of minimum degree greater than δ/2. This set is denoted by S_(t,δ). Moreover, the authors of that article characterized the circulant graphs G such that a_{−1}(G) ∈ {4, 5, 6, 7}.
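For the small graphs discussed in this section, a_k(G) can simply be computed by exhaustive search. The Python sketch below (networkx; illustrative graphs) does this; the exponential cost is unavoidable in general, consistent with the NP-completeness results above.

```python
# Brute-force a_k(G): smallest |S| with 2*delta_S(v) >= delta(v) + k for all v in S.
# Exponential in |V|; intended only for small illustrative graphs.
import itertools
import networkx as nx

def a_k(G, k):
    nodes = list(G.nodes)
    for size in range(1, len(nodes) + 1):
        for S in map(set, itertools.combinations(nodes, size)):
            if all(2 * sum(u in S for u in G.neighbors(v)) >= G.degree(v) + k for v in S):
                return size
    return None  # G has no defensive k-alliance

# On the cycle C_6: an adjacent pair works for k <= 0, but k = 1 forces the whole cycle.
print([a_k(nx.cycle_graph(6), k) for k in (-1, 0, 1)])  # [2, 2, 6]
```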
Another class of graphs whose defensive alliances have been studied is that of planar graphs. For instance, [28] is dedicated to the study of defensive alliances in planar graphs and contains results like the following one.

Theorem 18. [28] Let G be a planar graph of order n.
(ii) If n > 6 and G is a triangle-free graph, then
(iv) If n > 4 and G is a triangle-free graph, then

The following result concerns the particular case of trees.

Theorem 19. [29] For any tree T of order n, γ^a_k(T) ≥

The above bound is attained for k ∈ {−4, −3, −2, 0, 1} in the case of G = K_{1,4}. As a particular case of the above theorem we can derive the following lower bounds, obtained in [19].

Theorem 20. [19] If T is a tree of order n, then

and these bounds are sharp. Similar results were obtained in [3], also in terms of the leaves and support vertices of the tree; the authors additionally characterized the families of graphs achieving equality in the bounds.

Theorem 21. [3] Let T be a tree of order n ≥ 2 with l leaves and s support vertices. Then

A t-ary tree is a rooted tree in which each node has at most t children. A complete t-ary tree is a t-ary tree in which all the leaves have the same depth and all nodes except the leaves have t children. We let T_{t,d} denote the complete t-ary tree of depth/height d. With the above notation we present the following results, obtained in [18].

Theorem 22. [18] Let n be the order of T_{2,d}. Then for any d,

Theorem 23. [18] Let d be an integer greater than three.

An efficient algorithm to determine the global defensive alliance numbers of trees was proposed in [7], where the authors gave formulas to compute the global defensive alliance numbers of complete r-ary trees for r = 2, 3, 4. Since Theorems 22 and 23 provide formulas for r = 2, 3, here we include the formula for r = 4.

Consider the family ξ of trees T, where T is a star of odd order, or T is the tree obtained from K_{1,2t_1}, K_{1,2t_2}, ..., K_{1,2t_s} and tP_4 (the disjoint union of t copies of P_4) by adding s + t − 1 edges between leaves of these stars and paths, in such a way that the center of each star K_{1,2t_i} is adjacent to at least 1 + t_i leaves in T and each leaf of every copy of P_4 is incident to at least one new edge, where t ≥ 0, s ≥ 2 and t_i ≥ 2 for i = 1, 2, ..., s. Note that each support vertex of each tree in ξ must be adjacent to at least 3 leaves.

Theorem 26. [9] Let T be a tree of order n ≥ 3 with s support vertices. Then

with equality if and only if T ∈ ξ.

Relations between the (global) defensive k-alliance number and other invariants

It is well known that the algebraic connectivity of a graph is probably the most important information contained in the Laplacian spectrum. This eigenvalue is related to several important graph invariants, and it imposes reasonably good bounds on the values of several parameters of graphs which are very hard to compute. We now present a result about defensive alliances obtained in [30].

Theorem 27. [30] For any connected graph G and for every k ∈ {−δ, ..., ∆},

The cases k = −1 and k = 0 of the above theorem were studied previously in [27]. Other relations between defensive alliances and the eigenvalues of a graph appeared in [36], in this case involving the spectral radius.

Theorem 28. [36] For every graph G of order n and spectral radius λ,

The particular cases k = −1 and k = 0 of the above theorem were studied previously in [27].
Some relationships between the independence number (and the independent domination number) and the global defensive alliance number of a graph were investigated in [8, 14]; for instance, the following results were obtained there. In order to present some results from [14] we introduce notation defined in that article. F_1 is the family of graphs obtained from a clique S isomorphic to K_t by attaching t = δ_S(u) + 1 leaves at each vertex u ∈ S. F_2 is the family of bipartite graphs obtained from a balanced complete bipartite graph S isomorphic to K_{t,t} by attaching t + 1 leaves at each vertex u ∈ S. F_3 is the family of trees obtained from a tree S by attaching a set L_u of δ_S(u) + 1 leaves at each vertex u ∈ S. Similarly to the above result, some relationships between the independent domination number and the global defensive 0-alliance number of a graph were obtained in [14].

Complement graph and line graph

As special cases of graphs whose defensive alliances have been investigated we have the complement graph and the line graph.

Theorem 32. [37] If G is a graph of order n with maximum degree ∆, then

Theorem 33. [37] Let G be a graph of order n such that γ(G) > 3 and let k ∈ {−δ, ..., 0}. If the minimum defensive k-alliance in G is not global, then

Hereafter, we denote by L(G) the line graph of a simple graph G. Some of the following results generalize, to defensive k-alliances, previous results obtained in [38] on defensive (−1)-alliances and defensive 0-alliances. As a consequence of the above results, the following interesting result was obtained in [30].

Corollary 37. [30] For any (δ_1, δ_2)-semiregular bipartite graph G, δ_1 > δ_2, and for every k,

We point out that from the results shown in the other sections of this article on a_k(G), we can derive some new results on a_k(L(G)).

Boundary defensive k-alliances

Several basic properties of boundary defensive alliances were presented in [41].

Remark 38. [41] Let G be a simple graph and let k ∈ {−∆, ..., ∆}. If for every v ∈ V, δ(v) − k is an odd number, then G does not contain any boundary defensive k-alliance.

Remark 39. [41] If S is a defensive k-alliance in G and S̄ is a global offensive (−k)-alliance in G, then S is a boundary defensive k-alliance in G.

Notice that if S is a boundary defensive k-alliance in a graph G, then a_k(G) ≤ |S|. So lower bounds for the defensive k-alliance number are also lower bounds for the cardinality of any boundary defensive k-alliance, and upper bounds for the cardinality of any boundary defensive k-alliance are upper bounds for the defensive k-alliance number. For instance, the lower bound shown in Theorem 13 leads to a lower bound for the cardinality of any boundary defensive k-alliance. In the next result we state an upper bound for the cardinality of any boundary defensive k-alliance, which coincides with the one obtained in Theorem 13 for the defensive k-alliance number.

Remark 41. [41] If S is a boundary defensive k-alliance in a graph G, then

As the following corollary shows, the above bounds are tight.

Corollary 42. [41] The cardinality of every boundary defensive k-alliance S in the complete graph of order n is |S| = (n + k + 1)/2.

As a consequence of the above corollary, the complete graph G = K_n has boundary defensive k-alliances if and only if n + k + 1 is even. Boundary defensive alliances have also been related to the (Laplacian) spectrum of the graph, as we can see below.
The following theorems show the relationship between the algebraic connectivity (and the Laplacian spectral radius) of a graph and the cardinality of its boundary defensive k-alliances.

Theorem 43. [41] Let G be a connected graph. If S is a boundary defensive k-alliance in G, then

If G = K_n, then µ = µ* = n and ∆ = δ = n − 1; therefore, the above theorem leads to the same result as Corollary 42.

Theorem 44. [41] Let G be a connected graph. If S is a boundary defensive k-alliance in G, then

Notice that in the case of the complete graph G = K_n, the above theorem also leads to Corollary 42.

Boundary defensive k-alliances have also been studied in connection with planar subgraphs. The Euler formula states that for a connected planar graph of order n, size m and f faces, n − m + f = 2. As a direct consequence of Theorem 40 and the Euler formula, the following result is obtained.

Corollary 45. [41] Let G = (V, E) be a graph and let S ⊂ V. Let c be the number of edges of G with one endpoint in S and the other endpoint outside of S. If S is a boundary defensive k-alliance in G such that ⟨S⟩ is planar and connected with f faces, then

Theorem 46. [41] Let G be a graph and let S be a boundary defensive k-alliance in G such that ⟨S⟩ is planar and connected with f faces; then

The above bound is tight. For instance, it is attained for the complete graph G = K_5, where any set of cardinality four forms a boundary defensive 2-alliance and ⟨S⟩ ≅ K_4 is planar with f = 4 faces.

Theorem 47. [41] Let G be a graph and let S be a boundary defensive k-alliance in G such that ⟨S⟩ is planar and connected with f > 2 faces.

By Corollary 45 the above bounds are tight.

Defensive alliances in Cartesian product graphs

We recall that the Cartesian product of two graphs G = (V_1, E_1) and H = (V_2, E_2) is the graph G□H with vertex set V_1 × V_2, in which two vertices (a, b) and (c, d) are adjacent if and only if a = c and b ∼ d, or b = d and a ∼ c. The study of defensive alliances in Cartesian product graphs was initiated in [26], where the authors obtained the following result.

Theorem 48. [26] For any Cartesian product graph G□H,

Given graphs G = (V_1, E_1) and H = (V_2, E_2), let S ⊂ V_1 × V_2 be a set of vertices of G□H, and let P_{G_i}(S) denote the projection of the set S onto G_i. Then, for every u ∈ P_G(S) and every v ∈ P_H(S), define X_u = {(x, v) ∈ S : x = u} and Y_v = {(u, y) ∈ S : y = v}.

Theorem 49. [40] If S ⊂ V_1 × V_2 is a defensive k-alliance in G□H, then for every u ∈ P_G(S) and for every v ∈ P_H(S), P_H(X_u) and P_G(Y_v) are a defensive (k − ∆_1)-alliance in H and a defensive (k − ∆_2)-alliance in G, respectively, where ∆_1 and ∆_2 are the maximum degrees of G and H.

Also, since the union of defensive k-alliances in a graph is again a defensive k-alliance, the following consequence of the above result is obtained.

Corollary 50. [40] Let G = (V_1, E_1) and H = (V_2, E_2) be graphs of maximum degree ∆_1 and ∆_2, respectively. If S ⊂ V_1 × V_2 is a defensive k-alliance in G□H, then the projections P_G(S) and P_H(S) of S onto the graphs G and H are a defensive (k − ∆_2)-alliance and a defensive (k − ∆_1)-alliance in G and H, respectively.

Theorem 52. [40] For any graphs G and H, if S_1 is a defensive k_1-alliance in G and S_2 is a defensive k_2-alliance in H, then S_1 × S_2 is a defensive (k_1 + k_2)-alliance in G□H, and

The bound of the above theorem generalizes the results obtained in Theorem 48.
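A quick numerical sanity check of Theorem 52 on a small example (Python/networkx; the graphs and alliances are chosen purely for illustration):

```python
# Verify on one example: S1 a defensive k1-alliance in G and S2 a defensive k2-alliance in H
# imply S1 x S2 is a defensive (k1 + k2)-alliance in the Cartesian product of G and H.
import networkx as nx

def is_defensive_k_alliance(G, S, k):
    S = set(S)
    return all(2 * sum(u in S for u in G.neighbors(v)) >= G.degree(v) + k for v in S)

G, H = nx.complete_graph(4), nx.cycle_graph(5)
S1, k1 = {0, 1, 2}, 1      # in K_4: 2 neighbors inside, 1 outside
S2, k2 = {0, 1}, -1        # adjacent pair in C_5: 1 inside, 1 outside

assert is_defensive_k_alliance(G, S1, k1) and is_defensive_k_alliance(H, S2, k2)
GH = nx.cartesian_product(G, H)                  # vertices are pairs (a, b)
S = {(a, b) for a in S1 for b in S2}
print(is_defensive_k_alliance(GH, S, k1 + k2))   # True
```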
Another interesting consequence of Theorem 52 is the following.

Corollary 53. [40] Let G and H be two graphs of orders n_1 and n_2 and maximum degrees ∆_1 and ∆_2, respectively, and let s ∈ Z with max{∆_1, ∆_2} ≤ s ≤ ∆_1 + ∆_2 + k.

As a consequence of Theorem 52, the following relationship between global defensive alliances in Cartesian product graphs and global defensive alliances in their factors is obtained.

Partitioning a graph into defensive k-alliances

Another point of interest in investigating defensive alliances concerns graph partitions in which each set is a defensive alliance. Partitions of a graph into defensive (−1)-alliances were studied in [12, 13]. In these articles the concept of the (global) defensive alliance partition number, ψ^gd_{−1}(G) (respectively ψ^d_{−1}(G)), was defined as the maximum number of sets in a partition of a graph such that every set of the partition is a (global) defensive (−1)-alliance.

Theorem 55. [13] Let G be a connected graph of order n ≥ 3. Then

Theorem 56. [13] Let G be a graph with minimum degree δ. Then

Moreover, the partitions of trees and grid graphs into (global) defensive (−1)-alliances were studied in [12] and [21], respectively.

Theorem 57. [12] Let G be a connected graph with minimum degree δ. Then

As a consequence of the above result, the following interesting result was obtained in [12]. Moreover, some families of trees satisfying ψ^gd_{−1}(T) = 1 or ψ^gd_{−1}(T) = 2 were characterized in [12]. The following results for the class of grid graphs P_r□P_c are known from [21].

Extreme cases are ψ^d_{−∆}(G) = n, where each set composed of one vertex is a defensive (−∆)-alliance, and ψ^d_δ(G) = 1 for a connected δ-regular graph, where the whole vertex set of G is the only defensive δ-alliance. Hereafter we will say that Π^gd_r(G) (respectively Π^d_r(G)) is a partition of G into r global defensive k-alliances (respectively defensive k-alliances). The following family of graphs was considered in [40] to analyze the tightness of several of its results.

Example 61. [40] Let k and r be integers such that r > 1 and r + k > 0, and let H be a family of graphs whose vertex set is V = V_1 ∪ ··· ∪ V_r, where {V_1, ..., V_r} is a partition of each graph belonging to H into r global defensive k-alliances. A particular family of graphs included in H is K_{r+k}□K_r.

Hereafter, H will denote the family of graphs defined in the above example. From Theorem 13, inequality (6) is obtained. By Theorem 65 and equation (6), the following two necessary conditions for the existence of a partition of a graph into r global defensive k-alliances are obtained.

Partitioning a graph into boundary defensive k-alliances

The above bounds are tight. For instance, if n is even, each pair of vertices of K_n forms a boundary defensive (3 − n)-alliance; thus, K_n can be partitioned into n/2 such alliances. As a consequence of Theorem 43, the following result is obtained.

Corollary 68. [40] If G can be partitioned into r boundary defensive k-alliances, then

The above bounds are tight. By Corollary 68 it is concluded, for instance, that if the Petersen graph can be partitioned into r boundary defensive k-alliances, then k = 1 and r = 2 (in this case ∆ = δ = 3, µ = 2 and µ* = 5).

Theorem 69. [40] Let G = (V, E) be a graph and let M ⊂ E be a cut set partitioning V into two boundary defensive k-alliances S and S̄, where k ≠ ∆ and k ≠ δ. Then

Corollary 70. [40] Let G = (V, E) be a δ-regular graph and let M ⊂ E be a cut set partitioning V into two boundary defensive k-alliances S and S̄. Then |S| = n/2 and |M| = n(δ − k)/4.

Theorem 71. [40] If {X, Y} is a partition of V into two boundary defensive k-alliances in G = (V, E), then, without loss of generality,

By Corollary 70 and Theorem 71, the following interesting consequence is obtained.
Theorem 72. [40] Let G = (V, E) be a δ-regular graph. If G is partitionable into two boundary defensive k-alliances, then the algebraic connectivity of G is µ = δ − k (an even number).

From this necessary condition for the existence of a partition of V into two boundary defensive k-alliances it follows, for instance, that the icosahedron cannot be partitioned into two boundary defensive k-alliances, because its algebraic connectivity is µ = 5 − √5 ∉ Z. Moreover, the Petersen graph can only be partitioned into two boundary defensive k-alliances in the case k = 1, because δ = 3 and µ = 2.

Partitioning G□H into defensive k-alliances

In this subsection some relationships between ψ^d_{k_1+k_2}(G□H) and ψ^d_{k_i}(G_i), i ∈ {1, 2}, are presented. From Theorem 52 it follows that if G contains a defensive k_1-alliance and H contains a defensive k_2-alliance, then G□H contains a defensive (k_1 + k_2)-alliance. Therefore, the following result is obtained.

Theorem 73. [40] For any graphs G and H, if there exists a partition of G_i into defensive k_i-alliances, i ∈ {1, 2}, then there exists a partition of G□H into defensive (k_1 + k_2)-alliances, and

In the particular case of the Petersen graph, P, and the 3-cube graph, Q_3, it follows that ψ^d

Corollary 74. [40] Let G_i be a graph of order n_i and maximum degree ∆_i, i ∈ {1, 2}. Let s ∈ Z such that max{∆_1, ∆_2} ≤ s ≤ ∆_1 + ∆_2 + k. Then

As an example of equality we take G = P, H = Q_3, k = 1 and s = 3.

Next we present some results about global defensive k-alliances.

Theorem 75. [40] Let Π^gd_{r_i}(G_i) be a partition of a graph G_i, of order n_i, into r_i ≥ 1 global defensive k_i-alliances, ... {|X|}. Then,

Corollary 76. [40] If G_i is a graph of order n_i such that ψ^gd ... For the graph C_4□Q_3, by taking k_1 = 0 and k_2 = 1, equality is attained in Theorem 75 and Corollary 76.

Defensive k-alliance free sets

A set Y ⊆ V is a defensive k-alliance cover, k-dac, if for every defensive k-alliance S we have S ∩ Y ≠ ∅, i.e., Y contains at least one vertex from each defensive k-alliance of G. A k-dac set Y is minimal if no proper subset of Y is a defensive k-alliance cover set, and a minimum k-dac set is a minimal cover set of smallest cardinality. Also, a set X ⊆ V is a defensive k-alliance free set, k-daf, if for every defensive k-alliance S we have S \ X ≠ ∅, i.e., X does not contain any defensive k-alliance as a subset. A k-daf set X is maximal if it is not a proper subset of any defensive k-alliance free set, and a maximum k-daf set is a maximal free set of largest cardinality. Hereafter, if there is no restriction on the values of k, we assume that k ∈ {−∆, ..., ∆}.

Theorem 77. [32, 33]
(i) X is a defensive k-alliance cover set if and only if X̄ is a defensive k-alliance free set.
(ii) If X is a minimal k-dac set then, for every v ∈ X, there exists a defensive k-alliance S_v for which S_v ∩ X = {v}.
(iii) If X is a maximal k-daf set then, for every v ∉ X, there exists S_v ⊆ X such that S_v ∪ {v} is a defensive k-alliance.

Associated with the characteristic sets defined above we have the following invariants: φ_k(G), the cardinality of a maximum k-daf set in G, and ζ_k(G), the cardinality of a minimum k-dac set in G.
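The invariant φ_k(G) can likewise be computed by exhaustive search for very small graphs. The Python sketch below (networkx; illustrative input) checks every vertex subset, and on K_4 with k = −1 it returns 1, matching the value (n + k − 1)/2 for complete graphs discussed below.

```python
# Brute-force phi_k(G): the maximum size of a set X containing no defensive k-alliance.
# Exhaustive over subsets of subsets; usable only for very small graphs.
import itertools
import networkx as nx

def is_dka(G, S, k):
    return all(2 * sum(u in S for u in G.neighbors(v)) >= G.degree(v) + k for v in S)

def phi_k(G, k):
    nodes = list(G.nodes)
    best = 0
    for size in range(1, len(nodes) + 1):
        for X in map(set, itertools.combinations(nodes, size)):
            # X is k-daf iff no nonempty subset of X is a defensive k-alliance.
            if not any(
                is_dka(G, set(S), k)
                for r in range(1, size + 1)
                for S in itertools.combinations(X, r)
            ):
                best = size
    return best

print(phi_k(nx.complete_graph(4), -1))  # 1 = (4 + (-1) - 1)/2
```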
The following corollary is a direct consequence of Theorem 77 (i). Our next result leads to a property related to the monotonicity of φ_k(G).

Theorem 83. For any connected graph G and −∆ ≤ k ≤ ∆,

where µ denotes the algebraic connectivity of G. The above bound is sharp, as can be checked, for instance, for the complete graph G = K_n: as the algebraic connectivity of K_n is µ = n, the above theorem gives the exact value φ_k(K_n) = (n + k − 1)/2.

Theorem 84. For any connected graph G and −∆ ≤ k ≤ ∆,

where µ* denotes the Laplacian spectral radius of G.

We emphasize that Corollary 88 and Proposition 85 lead to infinite families of graphs whose Cartesian product satisfies φ^d_k(G_1□G_2) = n_1n_2. For instance, if G_1 is a tree of order n_1 and maximum degree ∆_1 ≥ 2, G_2 is a graph of order n_2 and maximum degree ∆_2, and k ∈ {2 + ∆_2, ..., ∆_1 + ∆_2}, we have φ^d_k(G_1□G_2) = φ^d_{k−∆_2}(G_1)·n_2 = n_1n_2. In particular, if G_2 is a cycle graph, then φ^d_4(G_1□G_2) = n_1n_2. Another example of equality in Corollary 88 (ii) is obtained, for instance, by taking the Cartesian product of the star graph S_t of order t + 1 and the path graph P_r of order r. In that case, for G_1 = S_t we have δ_1 = 1, n_1 = t + 1 and φ^d_0(G_1) = t, and for G_2 = P_r we have δ_2 = 1, n_2 = r and φ^d_1(G_2) = r − 1. Therefore, φ^d_0(G_1)·φ^d_1(G_2) + min{n_1 − φ^d_0(G_1), n_2 − φ^d_1(G_2)} = t(r − 1) + 1. On the other hand, it is not difficult to check that if we take all the leaves belonging to the copies of S_t corresponding to the first r − 1 vertices of G_2, and add the vertex of degree t belonging to the last copy of S_t, we obtain a maximum defensive 0-alliance free set of cardinality t(r − 1) + 1 in the graph G_1□G_2; that is, φ^d_0(G_1□G_2) = t(r − 1) + 1. This example also shows that this bound is better than the bound obtained in Remark 86, which is t⌈r/2⌉ + 1. In this particular case, both bounds are equal if and only if r = 2 or r = 3.

Theorem 89. [43] Let G_i = (V_i, E_i) be a graph and let S_i ⊆ V_i, i ∈ {1, 2}. If S_1 × S_2 is a k-daf set in G_1□G_2 and S_2 is a defensive k′-alliance in G_2, then S_1 is a (k − k′)-daf set in G_1.

Taking into account that V_2 is a defensive δ_2-alliance in G_2, we obtain the following result.

Corollary 90. [43] Let G_i = (V_i, E_i) be a graph, i ∈ {1, 2}, let δ_2 be the minimum degree of G_2, and let S_1 ⊆ V_1. If S_1 × V_2 is a k-daf set in G_1□G_2, then S_1 is a (k − δ_2)-daf set in G_1.

By Theorem 87 (i) and Corollary 90 we obtain the following result.
Health Literacy in Rural Areas of China: Hypertension Knowledge Survey

We conducted this study to determine levels and correlates of hypertension knowledge among rural Chinese adults, and to assess the association between knowledge levels and salty food consumption in hypertensive and non-hypertensive populations. This face-to-face cross-sectional survey included 665 hypertensive and 854 non-hypertensive respondents in the rural areas of Heilongjiang province, China. Hypertension knowledge was assessed with a 10-item test; respondents received 10 points for each correct answer. The average hypertension knowledge score was 26 out of a maximum of 100 points for hypertensive respondents and 20 for non-hypertensive respondents. Hypertension knowledge was associated with marital status, education, health status, periodically reading books, newspapers or other materials, history of blood pressure measurement, and attendance at hypertension educational sessions. Hypertension knowledge is extremely low in rural areas of China. Hypertension education programs should focus on marginalized populations, such as individuals who are not married or are illiterate, to enhance their knowledge levels. Addressing educational and literacy levels alongside health education is important, given that illiteracy is still a prominent issue for the Chinese rural population.

Introduction

China is a large country with a population of about 1.339 billion, 50.3% of whom reside in rural areas [1], and it has the largest hypertensive population in the world [2,3]. From 1991 to 2002, hypertension prevalence increased from 16.3% to 21.0% among urban adults and from 11.1% to 18.0% among rural adults [2,4,5]. The increase in prevalence in rural areas is due to economic development, resulting in the adoption of urban lifestyles, as well as improved case finding [6,7]. Hypertension-related diseases cost 31.89 billion Yuan Renminbi (RMB, approximately US$4.8 billion) per year and result in about 11.4 years of life lost [8,9]. Chronic disease prevention has been identified as a high public health need in China's 2009 health reforms, with national and local governments allocating 15 Yuan (approximately US$2.2) per person per year for basic public health services, including chronic disease prevention, with high priority placed on hypertension prevention and management [10]. In rural areas, primary care physicians are required to measure blood pressure, register and follow up with patients, and provide hypertension education [11,12]. Rural areas in northeast China have the highest rates of hypertension and stroke in the country [6,7]. To eliminate this geographic disparity, efficient hypertension prevention, control, and management programs must be developed in these areas under the Chinese public health initiative.

Health literacy is related to hypertension management, treatment, and outcomes [13,14]. Andeseun et al. [14] surveyed 72 patients newly initiated on dialysis and found that the average diastolic blood pressure reading was lower among patients with adequate health literacy skills than among those without such skills. We therefore conducted this study to evaluate the level of current hypertension knowledge, the factors associated with hypertension knowledge, and the association between hypertension knowledge and the frequency of salty food consumption among hypertensive and non-hypertensive populations in rural areas of Heilongjiang province, China.
Our findings are essential to identify the gaps in current hypertension knowledge, and thus to inform the development of effective health education programs for the prevention and management of hypertension.

Study Population

We conducted cross-sectional face-to-face surveys in hypertensive and non-hypertensive populations in the rural areas of Heilongjiang province, in the northeast of China. To obtain the two convenience samples, we first selected two counties (Fujin and Linkou), which were willing to cooperate with our survey and had convenient transportation access for our surveyors. Within each county, we categorized districts by economic development status (i.e., relatively poor and non-poor, annual GDP ≥ 3,000 Yuan/person), and selected two districts from each category, for a total of eight districts. In each of the eight districts, villages with a minimum population of 800 were categorized into low, medium and high economic development groups, and one village was chosen from each group, for a total of 24 villages. The level of economic development was determined by county health department staff, who coordinated this survey and were familiar with the economic status in the county. This study included residents who were aged 30 years or older and had resided in the village for at least five years. We excluded people who were incapable of participating in the survey due to mental or physical disorders, such as severe senile dementia and schizophrenia. Physicians in these villages register patients with hypertension. People without hypertension included in this study were those who were not recorded in the hypertension registry, had never been diagnosed as hypertensive based on self-report, and had normal blood pressure measured at the time of this survey. A standardized mercury sphygmomanometer was used to measure blood pressure following the American Heart Association protocol [15], that is, performing three blood pressure measurements with the participant in the sitting position after five minutes of rest. In addition, participants were advised to avoid alcohol, cigarettes, coffee, tea, and exercise for at least 30 minutes before their blood pressure measurement.

Data Collection

Undergraduate medical students conducted face-to-face interviews from 23 to 26 July 2010, at village clinics and administration offices. Before the survey, the students were trained in survey administration and in blood pressure measurement, and had opportunities to practice interviewing. The survey, including questions on socio-demographic characteristics, health status, hypertension knowledge, attitudes to health and disease prevention, and lifestyle, was tested and revised through a pilot study with 25 people (questionnaire available on request). Physicians in the 24 villages were paid by a fee-for-service plan and incentivized by the number of hypertensive patients who were registered and managed. Thus, the registry is relatively complete and frequently updated. We discussed our study population criteria with village physicians who registered hypertensive patients. These physicians reside in the villages and know the residents well. Nearly all the residents in the villages are farmers. We asked village physicians to invite all hypertensive patients and non-hypertensive residents in the 24 villages to participate in the survey. We attempted to survey the same number of hypertensive and non-hypertensive individuals in each village.
Because of the convenience sampling strategy, the response rate was unknown, although the majority of hypertensive patients were likely surveyed. Interviewers explained the purpose and confidentiality of the survey and then invited residents to participate; participation in the survey was accepted as oral consent. If there was more than one eligible person in a family, the first arrival at the interview site was interviewed. The completeness of questionnaires was checked immediately after the survey was administered; if there was missing information, individuals were resurveyed before they left the survey site.

Study Variables

Hypertension knowledge was assessed using an instrument which had been validated in a previous study [16]. The instrument contains 10 questions. In addition, we collected information on socio-demographic characteristics (sex, age, education level, and marital status), self-perceived physical and mental health (5-point Likert-type scale of excellent, very good, good, fair, and poor), quality of life using the EuroQol-5 [17], which was translated into Chinese [18,19], and the presence of 11 physician-diagnosed chronic diseases (i.e., liver disease, lung disease, peptic ulcer disease, renal disease, arthritis, chronic back pain, diabetes, neurological disorder including stroke, cancer, allergy, and depression). Non-physician-diagnosed chronic diseases were not included because the validity of self-diagnosed chronic disease depends on the level of the respondent's knowledge and their perceptions of "disease" and "health". Physician-diagnosed chronic disease was further confirmed by questions about the type of hospital where the diagnosis was made. Information was also collected on when blood pressure had last been measured (within the past 12 months, more than 12 months ago, or never). Additionally, we collected information on whether residents had attended a hypertension educational session provided by clinicians and the provincial ministry of health in the last 6 months (yes, no), whether they took care of their own health (yes, no), frequency of reading newspapers, magazines or books in the last month (at least once a week, less than once a week, or never), and source of hypertension knowledge. Salty food consumption was defined as the frequency of eating salty pickled vegetables in the last week (none, 1-2 times, 3-4 times, 5-6 times and daily). In the region, salty pickled vegetables are very common as a side dish. Our survey was conducted in summer. Fresh vegetables are abundant in summer and cost far less than in the winter season. Nearly all families store a large amount of salty pickled vegetables for winter. It was assumed that if respondents ate salty pickled vegetables frequently in summer, they would eat them more frequently in winter. Thus, determining the frequency of eating such a salty food is one way to assess the effectiveness of hypertension education programs in the region.

Statistical Analysis

Descriptive statistics were employed to describe characteristics of respondents with and without hypertension. A score of ten was assigned for each correct response, giving a maximum score of 100 per respondent. The hypertension knowledge score was found to be skewed and was normalized by the natural logarithm transformation (a sketch of this scoring is given below). Multivariate linear regressions were used to determine factors associated with hypertension knowledge following a step-wise modeling strategy.
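A minimal sketch of the scoring and log-normalization just described (not the authors' code); the +1 offset before the logarithm is our assumption, since some respondents scored 0 and log(0) is undefined.

```python
import numpy as np

def knowledge_score(correct):
    """10 points per correct answer on the 10-item test (max 100).

    correct: boolean array of shape (n_respondents, 10).
    """
    return correct.sum(axis=1) * 10

def log_normalize(score):
    # Natural-log transform of the skewed score distribution; the +1
    # offset is an assumption to accommodate scores of 0.
    return np.log(score + 1)
```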
A linear regression model with hypertension knowledge as the dependent variable and one independent variable (such as age) was used to preliminarily assess the relationship between hypertension knowledge and the added independent variable. We repeated this step by removing the previously added variable and adding another independent variable until the association between hypertension knowledge and each of the independent variables had been examined. After this initial analysis, all independent variables were fitted by forward inclusion in a linear regression model to form a full or main-effect model. Sequentially, one independent variable that was neither biologically meaningful nor statistically significant was removed from the full model. Then another insignificant independent variable was removed and the previous variable was returned to the model, to observe changes in the coefficient of each variable in the model and thereby assess correlation among independent variables. We repeated this step until all variables had been examined. Finally, independent variables that were not associated with hypertension knowledge (p < 0.05) were removed to form the parsimonious model. Multivariate logistic regressions were used to assess the association between responses to the question "eating less salt usually makes blood pressure go down" (correct versus incorrect) and the frequency of eating salty pickled vegetables in the last week (≥1 time versus less), and the association between hypertension knowledge and blood pressure control (≥140/90, yes or no, based on blood pressure measurements at the survey). In these two logistic regressions, all covariates collected in the survey were treated as potential confounders and adjusted for (see the sketch below).

Results

Of the 1,519 individuals who participated in the survey, 665 were hypertensive and 854 were non-hypertensive respondents (see Table 1). In both groups, a majority of the respondents were 50 to 64 years old, were married, and had a low level of education. Chronic disease was more frequent among hypertensive than among non-hypertensive respondents (71.6% vs. 54.8%). Quality of life scores were worse for hypertensive than for non-hypertensive respondents. The percentage of respondents with correct responses to the hypertension knowledge questions ranged from 18.0% to 71.9% among hypertensive respondents and 11.8% to 63.0% among non-hypertensive respondents (see Table 2). Of the respondents, 83.0% of hypertensive and 89.8% of non-hypertensive respondents had a score of fewer than 50 points out of a possible 100. Furthermore, 12.0% of hypertensive and 25.1% of non-hypertensive respondents had scores of 0, that is, incorrect answers to all 10 questions (see Figure 1). The average hypertension knowledge score was 25.6 out of the maximum 100 points for hypertensive and 20.0 for non-hypertensive respondents (see Table 3). Only a small proportion of respondents correctly answered questions about hypertension complications (i.e., 36.5% for stroke, 38.9% for heart attack, 18.0% for kidney disease and 27.9% for eye disease among hypertensive respondents, and 31.2% for stroke, 32.1% for heart attack, 11.8% for kidney disease and 19.6% for eye disease among non-hypertensive respondents). The average score varied by age, education and marital status.

Table 3. Average hypertension knowledge score (standard deviation, SD) out of the maximum 100.
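To make the risk-adjusted odds-ratio analysis described in the Statistical Analysis section concrete, here is a minimal sketch using statsmodels; it is not the authors' code (they used their own software), and the DataFrame and column names (`ate_salty`, `salt_q_correct`, covariates) are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratio(df, outcome, exposure, covariates):
    """Fit a logistic regression and return the exposure's adjusted OR
    with its 95% confidence interval."""
    X = sm.add_constant(df[[exposure] + covariates])  # intercept + predictors
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    or_point = np.exp(fit.params[exposure])           # adjusted odds ratio
    ci_low, ci_high = np.exp(fit.conf_int().loc[exposure])
    return or_point, (ci_low, ci_high)

# Hypothetical usage mirroring Table 5: outcome = ate salty pickled
# vegetables at least once last week; exposure = correct answer to the
# salt question; covariates as listed in Table 1.
# or_, ci = adjusted_odds_ratio(df, "ate_salty", "salt_q_correct",
#                               ["sex", "age", "married", "education"])
```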
The factors associated with hypertension knowledge for both groups included marital status, education, when blood pressure had been measured, receiving hypertension education, and reading books, newspapers and magazines (see Table 4). In addition to these factors, mental health status was related to hypertension knowledge in non-hypertensive respondents (lower levels for respondents with poor mental health), and the presence of a chronic disease was related to higher hypertension knowledge in hypertensive respondents. The major sources of hypertension knowledge were health practitioners (62.4% for hypertensive and 32.2% for non-hypertensive respondents) and media (33.4% for hypertensive and 37.2% for non-hypertensive respondents). The frequency of eating salty pickled vegetables in the last week was lower among those who answered the question "eating less salt usually makes blood pressure go down" correctly than among those who answered it incorrectly (66.1% versus 70.1%, risk-adjusted odds ratio: 0.834, 95% confidence interval (95% CI): 0.578-1.202 among hypertensive respondents; and 70.6% versus 77.2%, odds ratio: 0.710, 95% CI: 0.514-0.979 among non-hypertensive respondents; see Table 5).

Table 5. Percentage and risk-adjusted odds ratio (OR)* with its 95% confidence interval (95% CI) for eating salty pickled vegetables at least once in the last week, by correct and incorrect response to the question "eating less salt usually makes blood pressure go down". * Two logistic regression models were fit, one for hypertensives and another for non-hypertensives. The dependent variable is eating salty pickled vegetables at least once in the last week (yes or no); the exposure variable is the response to the question "eating less salt usually makes blood pressure go down" (correct or incorrect); other independent variables included the variables listed in Table 1, such as sex, age, marital status, education, quality of life, chronic disease, self-reported mental and physical health, and caring about self-health.

Among hypertensive respondents, 48.3% had a systolic/diastolic blood pressure ≥140/90. The proportion slightly decreased with higher hypertension knowledge. However, the differences were not statistically significant after adjustment for covariates (see Table 6).

Table 6. Percentage and risk-adjusted odds ratio (OR)* with its 95% confidence interval (95% CI) for systolic/diastolic blood pressure ≥ 140/90 among hypertensive respondents. * Hypertension knowledge level: score based on the ten hypertension knowledge questions (10 points for each correct answer). We adjusted for the variables listed in Table 1, such as sex, age, marital status, education, quality of life, chronic disease, self-reported mental and physical health, and caring about self-health.

Discussion

This survey clearly demonstrates that hypertension knowledge levels are extremely low in rural areas of northeast China. Although the level is statistically different between hypertensive and non-hypertensive respondents (mean score of 26 versus 20 out of a potential score of 100, p < 0.05), the difference is small. Many people lacked knowledge about hypertension complications and medication. Among non-hypertensive respondents, knowledge level was related to dietary behavior: people who knew about the relation between salty food and blood pressure ate salty food less frequently than those who did not know the relation.
Hypertension knowledge is extremely low, regardless of hypertension status, in our sample. Astonishingly, 12% of hypertensive and 25% of non-hypertensive respondents had a score of 0, that is, incorrect answers to all 10 questions. The hypertension knowledge level we report for rural areas of China is much lower than in previous reports from Western countries [20-27]. For example, Sanne et al. [16] surveyed 296 hypertensive adults in the New Orleans metropolitan area using the same 10-item questionnaire as we used, and reported that 50% of the respondents answered at least seven questions correctly, with correct response rates ranging from 41.9% to 98.0%, much higher than the 11.8%-63.0% range in our hypertensive respondents. Ayotta et al. [25] assessed hypertension knowledge using six questions among 1,177 hypertensive patients in the United States. That study found correct response rates ranging from 43.9% to 93.1% across the six items, with 92.2% correctly answering the question about hypertension causing kidney problems, much higher than our finding of 18.0% on that item.

Our study demonstrated that hypertension knowledge levels were associated with marital status, education, health status, periodically reading books, newspapers or other materials, history of blood pressure measurement, and attending hypertension educational sessions provided by clinicians. However, knowledge levels did not vary by age or sex. In our sample, 77.3% of hypertensive and 62.3% of non-hypertensive respondents were illiterate or had only elementary schooling. Those who were illiterate had poorer hypertension knowledge than those with elementary or higher levels of schooling. This is congruent with our additional finding that people with regular reading habits had higher hypertension knowledge levels. These findings indicate that hypertension education programs should pay attention to adults with low literacy levels. In fact, 62.4% of hypertensive and 32.2% of non-hypertensive respondents reported that they received knowledge from their health practitioners. In the villages we studied, physicians or nurses regularly provide health education to residents, including hypertension education. We found that respondents who participated in the educational sessions had better hypertension knowledge than those who did not. In addition, hypertensive respondents with chronic diseases were likely to have better knowledge than those without, suggesting that healthcare providers take the opportunity to provide hypertension education when they have contact with their patients. Residents in rural areas generally do not go for regular health examinations unless they have a health problem, and thus have few opportunities to learn about hypertension.

Our study provides evidence that current educational programs have positive effects. Respondents reported that village health practitioners are the major source of hypertension information, and respondents who attended educational sessions had better knowledge than those who did not attend. Therefore, hypertension educational sessions should be continuously provided by village health practitioners, such as physicians and nurses, and should further target marginal populations such as individuals who are not married, are illiterate or have poor mental health status. Since health practitioners live in the village, they are familiar with each individual's social and cultural characteristics and health status, and are therefore in a position to identify these target groups.
In addition, village residents are likely to trust them, and therefore to attend and follow the instructions provided in the educational sessions. Non-hypertensive respondents who understood that salt is a risk factor for hypertension ate salty food less frequently than those who did not know that, illustrating that hypertension knowledge has an impact on preventive health behavior, as reported by William et al. [20]. There is an opportunity for educational programs to emphasize the long-term consequences of hypertension and to influence behavior. In this study, a majority of respondents were unaware of possible hypertension complications. For example, only 18% of the hypertensive and 11.8% of the non-hypertensive respondents knew hypertension could cause kidney disease. Only 49% of the hypertensive respondents knew that they needed to take medication daily. A lack of understanding of the long-term health outcomes of hypertension may lead to poor compliance with antihypertensive medication; sufficient evidence indicates that patients have better drug adherence if they are aware that increased blood pressure could reduce their life expectancy [12,28]. Our study demonstrated that the proportion of hypertensive respondents who met blood pressure control targets (i.e., below 140/90) slightly increased with higher levels of hypertension knowledge.

The survey found that respondents obtained hypertension knowledge mainly from the village clinics. Therefore, it is imperative to enhance the role of rural physicians in health literacy promotion. First, health professionals including nurses should be incentivized to lead health promotion and education activities. Second, the Chinese government has invested resources for chronic disease management and control; a certain amount of these resources should be specifically allocated to hypertension. Third, blood pressure monitors should be installed at pharmacies and clinics for free measurement. Fourth, education programs should target high-risk populations and sodium reduction. The essential information about hypertension should be disseminated through radio, mobile phone text messages, and posters at the village clinics.

This study has limitations. First, we interviewed people who came to the interview sites and were unable to assess an exact response rate. If this convenience sample had better health knowledge than those who were not surveyed, our findings about current hypertension knowledge levels could be over-estimated. Second, we missed people who were temporarily away from these villages for various reasons, such as working in urban areas; these people were deemed to be relatively healthy. Third, we surveyed two counties in the Heilongjiang province of China; generalizing our findings to other regions should be done with caution. Fourth, we assessed salt consumption using the frequency of eating salty pickled vegetables as a proxy. This method does not accurately quantify total salt consumption. Nevertheless, our study is consistent with previous studies in China [29,30], which reported low hypertension knowledge in rural areas. Our study also has several strengths. The interviews were conducted by medical students who are knowledgeable about hypertension and survey methods, so the quality of the data is likely reliable. Finally, hypertension status was determined by a combination of blood pressure measurement, self-report, and a hypertension registry.
Conclusions

Hypertension knowledge levels are alarmingly low in rural areas of China, particularly concerning hypertension complications and medication. Many factors contribute to this low knowledge level, such as the availability of health education programs, economic conditions, and cultural background. Raising education levels should be a national priority, because illiteracy is still a common issue for this population and must be overcome in order to improve health education. The knowledge content deficiencies that we identified could guide the development and improvement of educational programs for rural populations, with the goals of increasing awareness of hypertension, promoting blood pressure monitoring, and actively managing the disease.
Unilateral Discomfort Increases the Use of Contralateral Side during Sit-to-Stand Transfer

Individuals with unilateral impairment perform symmetrical movements asymmetrically. Restoring symmetry of movements is an important goal of rehabilitation. The aim of the study was to evaluate the effect of using discomfort-inducing devices on movement symmetry. Fifteen healthy individuals performed the sit-to-stand (STS) maneuver using devices inducing unilateral discomfort under the left sole and left thigh or right sole and right thigh, and without them. 3D body kinematics, ground reaction forces, electrical activity of muscles, and the level of perceived discomfort were recorded. The center of mass (COM), center of pressure (COP), and trunk displacements, as well as the magnitude and latency of muscle activity of lower limb muscles, were calculated during STS and compared to quantify the movement asymmetry. Discomfort on the left and right side of the body (thigh and foot) induced statistically significant displacement of the trunk towards the opposite side. There was statistically significant asymmetry in the activity of the left and right Tibialis Anterior, Medial Gastrocnemius, and Biceps Femoris muscles when discomfort was induced underneath the left side of the body (thigh and foot). The technique was effective in causing asymmetry and promoted the use of the contralateral side. The outcome provides a foundation for future investigations of the role of discomfort-inducing devices in improving symmetry of the STS in individuals with unilateral impairment.

Introduction

It is known that individuals with a unilateral impairment such as stroke show a characteristic asymmetry of gait, posture, and weight bearing in favor of the nonparetic leg [1-4]. This leads to the learned disuse of the more affected side of the body, a condition where patients learn to use the stronger side of their bodies while neglecting the weaker side [2]. While this compensation may be expedient for some patients, learned disuse can also lead to greater muscle weakness on the affected side, resulting in poorer performance of daily activities [5]. The standard approach to minimize the learned disuse of the upper limb is constraint-induced movement therapy (CIMT) [6]. CIMT is an approach in which a patient's stronger limb is constrained in order to force the patient to use the weaker limb. This approach has been successful in restoring function to the upper limbs of patients with stroke, traumatic brain injury, and other disorders [7-9]. However, CIMT, as it was developed, is restricted to treating the upper limb, and no equivalent therapy has been created to target the lower limb [10]. A possible reason for this is that movements generated by the lower limbs, such as locomotion and the sit-to-stand maneuver, are bilateral movements which cannot be adequately performed if both limbs are nonfunctional, one due to impairment and the other because of constraint [11]. As a result, constraining the stronger lower limb may not produce the desirable result of forced use in a patient with unilateral movement disorders. Nevertheless, the success of CIMT prompted the development of many forms of "forced use" therapies and has made it possible to apply forced use to the lower limbs. One of those approaches is Compelled Body Weight Shift Therapy (CBWST).
CBWST involves the use of a shoe insert that establishes a lift of the nonaffected lower extremity to force the patient to shift their body weight towards the more affected lower extremity [10]. The CBWST approach involving the use of a flat (smooth) lift under the nonparetic leg has been found to improve stance weight-bearing symmetry in individuals with stroke [10,12,13]. Multisession therapy using the CBWST approach has also been found to be helpful in the restoration of symmetry of stance and in improvement of gait velocity in individuals with acute stroke [14] and chronic stroke [15]. Approaches used to facilitate the utilization of the more impaired lower limb during sit-to-stand involve asymmetric positioning of the lower limbs [16] and the use of blocks below the unaffected foot, similarly to CBWST [17]. Thus, it has been shown that both asymmetrical foot placement and blocks are able to increase weight bearing of the more impaired limb in patients with a hemiparetic stroke performing the sit-to-stand task. Another approach to facilitate forced use of a limb is the utilization of nociceptive feedback via induced discomfort [18]. Unilateral discomfort has been shown to cause changes in postural control and movement control in healthy adults and also in patients with neuromuscular deficits during locomotion and quiet standing [19]. However, no studies involving experimentally induced discomfort have been performed during the STS task, an important activity of daily living which many patients have difficulty performing. In this study, we aimed to determine the feasibility of using a new approach of inducing unilateral discomfort in order to produce forced use of the contralateral side during the performance of the STS task. If unilateral discomfort brings about asymmetry of the performance of the STS in healthy individuals, the approach could potentially be beneficial to individuals with unilateral impairment. Thus, our hypothesis was that, during STS, healthy adults will exhibit movement asymmetry when discomfort is induced unilaterally under their thigh and foot. We also hypothesized that when discomfort is induced on the left side, movement will be greater on the right side, and vice versa.

Subjects

Fifteen healthy young adults (8 males, 7 females, 26.7 ± 3.9 years old, height 162.8 ± 8.9 cm, and body mass 66.0 ± 13.0 kg) participated in the study. All subjects were right-limb dominant. They all signed a written informed consent approved by the Institutional Review Board.

2.2. Protocol

The subjects were required to sit in a chair positioned on a force platform with both of their feet placed on top of the platform. The chair (66.0 cm high, 58.5 cm wide and 48.3 cm deep) had a nondeformable wooden seat, arm rests, and no back support. Each subject sat in the chair with a knee flexion angle of 90 degrees and an elbow flexion of 90 degrees. Subjects were required to perform the sit-to-stand maneuver with arm support and with or without unilateral discomfort. The experimental protocol began with a baseline (no discomfort) condition, followed by two randomized conditions: (1) standing up using arm support in the presence of discomfort induced on the left side (foot and thigh) (LC) and (2) standing up using arm support with discomfort induced on the right side (foot and thigh) (RC).
Discomfort was induced by tapered devices beneath both the thigh and the foot: on the seat under the thigh (at approximately 50% of the distance from the hip joint to the knee joint) and in a standard sandal provided for each subject. The thigh device was a set of 3 evenly spaced pyramidal metal protrusions (base 30 × 40 mm, top 17 mm, and height 35 mm) with a center-to-center distance of approximately 50 mm. The base of the set was 2 mm high, with the total height of the device being 37 mm. The foot device was an insole made of polyvinyl chloride embedded with 32 small 3 mm high pyramidal peaks with a center-to-center distance of approximately 10 mm. The base of the insole was 1 mm high, with the total height of the insole being 4 mm [19]. Each sit-to-stand trial consisted of sitting for approximately three seconds, standing up at a self-selected speed, and standing for approximately three seconds. Three trials were performed in each experimental condition. In each condition, subjects were asked to rate the level of their perceived discomfort using a 10 cm linear Visual Analogue Scale (VAS) [20], with one end (0) marked as "no discomfort at all" and the other end (10 cm) as "worst discomfort ever".

Data Collection and Processing

Three-dimensional kinematic data was collected using a six-camera VICON 612 system (Oxford Metrics, UK). Retroreflective markers were placed over anatomical landmarks bilaterally according to the Plug-In-Gait (PIG) model (Oxford Metrics), which includes the second metatarsal head, calcaneus, lateral malleolus, lateral epicondyle of the femur, a marker on the lateral border of the leg (between the lateral malleolus and femoral epicondyle markers), anterior/posterior superior iliac spines, a marker on the lateral border of the thigh (between the femoral epicondyle and anterior superior iliac spine markers), second metacarpal, lateral epicondyle of the humerus, acromioclavicular joint, and a marker on the lateral border of the arm (between the humeral epicondyle and the acromioclavicular joint markers). In addition, subjects wore head and wrist bands with four and two markers attached to them, respectively. Finally, five additional markers were attached over the following landmarks: 7th cervical vertebra (C7), 10th thoracic vertebra, inferior angle of the right scapula, between the two sternoclavicular joints, and xiphoid process of the sternum. A lower and upper limb model which estimated joint centers was created using the Plug-In-Gait (VICON) software. The kinematic data obtained from the 15 subjects was then filtered with a low-pass 4th-order Butterworth filter with a cutoff frequency of 2 Hz. The center of mass (COM) was computed using a rigid body model constructed with fourteen segments [21]. The trunk movement was characterized as the movement of the C7 marker in the rigid body model. The ground reaction forces and moments of force were collected via a force platform (Model OR-5, AMTI, USA); the signals were sampled at 5000 Hz. The data was then filtered with a low-pass 4th-order Butterworth filter with a cutoff frequency of 20 Hz. The center of pressure (COP) data was computed using methods described in the literature [21]. Electromyographic (EMG) activity was recorded from the Tibialis Anterior (TA), Medial Gastrocnemius (MG), Rectus Femoris (RF), and Biceps Femoris (BF) bilaterally. Based upon recommendations reported in previous literature [22], disposable electrodes (Red Dot, 3M) were attached to the muscle belly of each muscle after cleaning the skin with alcohol wipes. A ground electrode was attached to the anterior aspect of the leg over the tibial bone. EMG signals were collected from nine subjects. The signals were filtered and amplified (10-500 Hz, gain: 2000) with the EMG system (Myopac, RUN Technologies, USA). The raw signals were filtered with a high-pass 2nd-order Butterworth filter with a cutoff frequency of 20 Hz. The signals were then full-wave rectified and filtered with a low-pass 2nd-order Butterworth filter with a cutoff frequency of 2 Hz. The onset of muscle activity was determined by an algorithm which detected the moment when muscle activity surpassed baseline activity [23]. The amplitude of the muscle activity was computed as the integral of the muscle activity from movement onset to standing, and latency was computed as the difference between the movement onset and the muscle activity onset; a processing sketch is given below.
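The EMG conditioning chain described above (20 Hz high-pass, full-wave rectification, 2 Hz low-pass envelope, onset detection, and the activity integral) can be sketched as follows. This is a minimal illustration, not the authors' MATLAB code: the zero-phase `filtfilt` implementation and the mean-plus-3-SD onset threshold are our assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import trapezoid

FS = 5000.0  # EMG sampling rate (Hz), as stated in the Methods

def emg_envelope(raw):
    """20 Hz high-pass -> full-wave rectification -> 2 Hz low-pass."""
    bh, ah = butter(2, 20.0 / (FS / 2), btype="high")   # 2nd-order high-pass
    rectified = np.abs(filtfilt(bh, ah, raw))           # zero-phase (assumed)
    bl, al = butter(2, 2.0 / (FS / 2), btype="low")     # 2nd-order low-pass
    return filtfilt(bl, al, rectified)

def onset_latency(env, move_onset, baseline):
    """Latency (s) from movement onset to muscle activity onset.

    baseline: slice of quiet sitting used to estimate resting activity.
    The mean + 3*SD threshold is an assumption, not the cited algorithm [23].
    """
    threshold = env[baseline].mean() + 3.0 * env[baseline].std()
    above = np.flatnonzero(env[move_onset:] > threshold)
    return above[0] / FS if above.size else np.nan

def activity_integral(env, start, stop):
    """Integral of the envelope (mV*s) from movement onset to standing."""
    return trapezoid(env[start:stop], dx=1.0 / FS)
```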
The VICON 612 data station controlled data collection of all signals: forces, moments of force, and EMG signals were acquired at 5000 Hz, and kinematic data were collected at 100 Hz.

Data Analysis

Center of mass position, trunk movement, center of pressure position, and muscle activity were used to quantify the movement. To determine asymmetry, the maximum displacement of each movement variable to the left and right in the discomfort conditions was computed and compared. Before comparing the maximum displacements, each movement variable was normalized by subtracting the magnitude of the variable during the baseline condition from the magnitude of the same variable in the discomfort conditions. To ensure that the portions of the movement being subtracted were in phase, the sitting, stand-up, and standing phases of the task were normalized (via interpolation) to 100% of the period before the movement normalization was performed. The determination of the start and end of the sitting, stand-up, and standing phases was done using the ground reaction force data [24] and validated by the center of mass velocity [25]. To determine the symmetry of the muscle activity, the activity in the left muscle was compared to the activity in the right muscle. All data analysis was performed using MATLAB R2014b (MathWorks, MA, USA).

Statistical Analysis

A paired Student's t-test was used to determine if the discomfort levels induced in the two conditions were statistically significantly different by comparing the VAS scores (α = 0.05). A paired Student's t-test was used to determine if the maximum displacements of the COP, COM, and trunk to the left or right were significantly different between the two conditions (α = 0.05). The differences in muscle activity between left and right muscles in each condition were examined using a paired Student's t-test (α = 0.05). All statistical analyses were performed using SPSS v23 (IBM, NY, USA).

Results

Discomfort Levels

The levels of perceived discomfort in the baseline (no discomfort), RC and LC conditions were 0 cm, 1.7 cm, and 1.5 cm, respectively. The discomfort level in each of the RC and LC conditions was statistically different from the baseline condition (p < 0.05). There was no statistically significant difference between the RC and LC conditions (p > 0.05).

Duration of STS Performance

The duration of STS in the baseline condition was 2.94 ± 0.88 s; the durations of STS in the LC and RC conditions were 1.91 ± 0.55 and 1.53 ± 0.27 s, respectively. Relative to baseline, the reductions in STS duration in the LC and RC conditions were statistically significant (p < 0.05). However, the difference between the LC and RC conditions was not statistically significant.
Center of Mass Displacement

The maximum displacement of the center of mass (COM) to the right relative to the baseline in the LC condition was 0.020 ± 0.005 m. The maximum COM displacement to the right relative to the baseline in the RC condition was 0.011 ± 0.018 m. The difference between the LC and RC was not statistically significant. The maximum COM displacement to the left relative to the baseline in the LC condition was close to zero. The maximum COM displacement to the left relative to the baseline in the RC condition was 0.004 ± 0.016 m. The difference between the LC and RC was not statistically significant.

Trunk Displacement

The maximum displacement of the trunk to the right relative to the baseline in the LC condition was 0.024 ± 0.02 m. The maximum displacement of the trunk to the right relative to the baseline in the RC condition was 0.0147 ± 0.02 m. The difference between the LC and RC was statistically significant (p < 0.05) (Figure 1). The maximum displacement of the trunk to the left relative to the baseline in the LC condition was 0.01 ± 0.02 m. The maximum displacement of the trunk to the left relative to the baseline in the RC condition was 0.03 ± 0.02 m. The difference between the LC and RC was statistically significant (p < 0.05).

Center of Pressure Displacement

The maximum displacement of the center of pressure (COP) to the right relative to the baseline in the LC condition was 0.06 ± 0.06 m. The maximum COP displacement to the right relative to the baseline in the RC condition was 0.05 ± 0.07 m. The difference between the LC and RC was not statistically significant. The maximum COP displacement to the left relative to the baseline in the LC condition was 0.03 ± 0.02 m. The maximum COP displacement to the left relative to the baseline in the RC condition was 0.03 ± 0.03 m. The difference between the LC and RC was not statistically significant.

3.6. EMG Activity

The latencies of the left and right TA muscles in the LC condition were 0.47 ± 0.06 s and 0.41 ± 0.04 s, respectively (Table 1). This difference was statistically significant (p < 0.05). In the RC condition, the latencies of the left and right TA muscles were 0.39 ± 0.13 s and 0.33 ± 0.09 s, respectively. This difference, however, was not significant. For the left and right MG muscles, the latencies in the LC condition were 0.61 ± 0.15 s and 0.48 ± 0.11 s, respectively. This difference was statistically significant. The latencies of the left and right MG muscles in the RC condition were 0.37 ± 0.23 s and 0.23 ± 0.28 s, respectively. This difference was not statistically significant. The latency of the left BF muscle in the LC condition was 0.48 ± 0.04 s, and it was 0.41 ± 0.04 s for the right BF muscle. This difference was statistically significant (p < 0.05). The latencies of the left and right BF muscles in the RC condition were 0.38 ± 0.13 s and 0.33 ± 0.08 s, respectively. The difference, however, was not significant. For the left and right RF muscles, the latencies in the LC condition were 0.32 ± 0.06 s and 0.50 ± 0.06 s, respectively. This difference was statistically significant. The latencies of the left and right RF muscles in the RC condition were 0.38 ± 0.08 s and 0.46 ± 0.15 s, respectively. This difference was not statistically significant (p > 0.05). Integrals of the EMG activity of the left and right leg muscles are shown in Figure 2. In general, the activity of a muscle on the side contralateral to the side of the induced discomfort increased, indicating an asymmetrical pattern.
Thus, the integral of the EMG activity of the left Tibialis Anterior (TA) muscle in the LC (the condition with discomfort induced on the left side) was 180.6 ± 64.9 mV*s, and it increased to 213.1 ± 68.4 mV*s in the right TA (p < 0.05). The integral of the left TA muscle in the RC was 278.1 ± 151.9 mV*s, and it decreased to 266.9 ± 174.6 mV*s in the right TA. However, this difference was not statistically significant. The integral of the left Medial Gastrocnemius (MG) muscle in the LC was 46.66 ± 12.91 mV*s; it was 67.02 ± 17.57 mV*s in the right MG (p < 0.05). The integrals of the left and right MG muscles in the RC were 60.55 ± 18.68 mV*s and 57.36 ± 17.57 mV*s, respectively. This difference was not statistically significant (p > 0.05). The integral of the left Biceps Femoris (BF) in the LC was 100.4 ± 16.1 mV*s, and it was 130.8 ± 48.2 mV*s in the right BF (p < 0.05). The integrals of the left and right BF muscles in the RC were 180.5 ± 110.4 mV*s and 134.3 ± 125.3 mV*s, respectively. This difference was not statistically significant (p > 0.05). In contrast to this trend, the integrals of the left and right Rectus Femoris (RF) in the LC were 103.1 ± 14.4 mV*s and 86.88 ± 26.3 mV*s, respectively. This difference was not statistically significant. The integrals of the left and right RF muscles in the RC were 110.3 ± 28.14 mV*s and 97.9 ± 40.37 mV*s, respectively. This difference was also not statistically significant.

Discussion

The aim of this study was to determine whether devices inducing unilateral discomfort increase the use of the contralateral limb in adults performing the sit-to-stand task. We hypothesized that, during STS, healthy young adults would exhibit movement asymmetry when discomfort was induced unilaterally under their thigh and foot. The study demonstrated that when experiencing unilateral discomfort, subjects used asymmetrical trunk movements and increased the activation of the lower limb muscles on the side opposite to the side of the induced discomfort. Thus, the hypothesis that healthy young adults would exhibit movement asymmetry, and thus increased contralateral limb use, when discomfort is induced unilaterally under their thigh and foot, was supported. Asymmetrical loading during STS is reported in people with unilateral impairment, for example, those who underwent transtibial amputation [26], total knee arthroplasty [27], and anterior cruciate ligament reconstruction [28], and in individuals with stroke [29,30]. It is described in the literature that when individuals with stroke performed the STS with the paretic foot placed behind the healthy foot, they improved the symmetry of their movement [31]. Moreover, when the unaffected foot of individuals with stroke was placed on a small lift, the EMG activity of muscles in the affected limb recorded during the STS increased, and that in the unaffected limb decreased [32]. The similar increase in the EMG activity of the muscles on the side opposite to the side of the induced discomfort observed in the current study suggests that the approach indeed could be beneficial to individuals with unilateral impairment. It is reported in the literature that individuals with a unilateral stroke perform the sit-to-stand task significantly more slowly than healthy controls [33]. Moreover, it has been described that individuals with stroke shortened the rise time after sit-to-stand training in which the feet were positioned asymmetrically (the paretic foot placed posterior) [34].
The subjects in the current study performed STS faster while being exposed to discomfort. Moreover, healthy subjects experiencing discomfort demonstrated changes in the performance of the STS task, seen as asymmetrical movements of the trunk as well as the reported asymmetrical pattern of activation of the leg muscles. As such, it is tempting to suggest that individuals with unilateral stroke exposed to discomfort on the nonaffected side could perform the sit-to-stand task somewhat faster. This suggestion, however, should be tested in experiments involving individuals with stroke. There are some study limitations. First, the level of discomfort induced in each subject was not customized, which resulted in a wide range of discomfort. For this study, performing a regression analysis would not have allowed us to glean a meaningful result. However, future studies should aim to mathematically describe how increasing the level of discomfort affects movement asymmetry and to determine the important variables which control discomfort levels. Second, this study focused on healthy young adults and the immediate effect of discomfort on performing the sit-to-stand task.

Conclusions

When healthy subjects were provided with the discomfort-inducing devices, they performed the sit-to-stand task asymmetrically. The results suggest that if discomfort is induced on the unaffected side of individuals with unilateral impairment, it can help such individuals regain the ability to rise from a chair more symmetrically. The outcome of the study provides a foundation for the investigation of the effect of discomfort-inducing devices in the rehabilitation of people with unilateral impairments.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
Post-extubation dysphagia and dysphonia amongst adults with COVID-19 in the Republic of Ireland: A prospective multi-site observational cohort study

Abstract

Objectives: This study aims to (i) investigate post-extubation dysphagia and dysphonia amongst adults intubated with SARS-CoV-2 (COVID-19) and referred to speech and language therapy (SLT) in acute hospitals across the Republic of Ireland (ROI) between March and June 2020; (ii) identify variables predictive of post-extubation oral intake status and dysphonia; and (iii) establish SLT rehabilitation needs and services provided to this cohort.

Design: A multi-site prospective observational cohort study.

Participants: One hundred adults with confirmed COVID-19 who were intubated across eleven acute hospital sites in ROI and who were referred to SLT services between March and June 2020 inclusive.

Main Outcome Measures: Oral intake status, level of diet modification and perceptual voice quality.

Results: Based on initial SLT assessment, 90% required altered oral intake and 59% required tube feeding, with 36% not allowed oral intake. Age (OR 1.064; 95% CI 1.018-1.112), proning (OR 3.671; 95% CI 1.128-11.943) and pre-existing respiratory disease (OR 5.863; 95% CI 1.521-11.599) were predictors of oral intake status post-extubation. Two-thirds (66%) presented with dysphonia post-extubation. Intubation injury (OR 10.471; 95% CI 1.060-103.466) and pre-existing respiratory disease (OR 24.196; 95% CI 1.609-363.78) were predictors of post-extubation voice quality. Thirty-seven per cent required dysphagia intervention post-extubation, whereas 20% needed intervention for voice. Dysphagia and dysphonia persisted in 27% and 37% of cases, respectively, at hospital discharge.

Discussion: Post-extubation dysphagia and dysphonia were prevalent amongst adults with COVID-19 across the ROI. Predictors included iatrogenic factors and underlying respiratory disease. Prompt evaluation and intervention is needed to minimise complications and inform rehabilitation planning.

Keywords: COVID-19, dysphagia, dysphonia, intubation, post-extubation, speech and language therapy, swallowing, voice

Key Points

1. Post-extubation dysphagia and dysphonia are multifactorial and can lead to prolonged ICU stay, prolonged tube feeding, aspiration pneumonia and increased morbidity and mortality.
2. In this multi-site prospective cohort study across eleven acute hospitals, 90% of adults required an altered oral diet post-extubation and 36% were not allowed oral intake based on SLT evaluation. Sixty-six per cent presented with post-extubation dysphonia.
3. Age, proning and pre-existing respiratory disease were predictors of post-extubation oral intake status, whereas intubation injury and pre-existing respiratory disease were predictors of post-extubation dysphonia.
4. Over a third (37%) required dysphagia intervention post-extubation, whereas 20% needed intervention for voice. Dysphagia and dysphonia persisted in 27% and 37% of cases, respectively, at hospital discharge, indicating that speech and language therapists should be included in outpatient multidisciplinary COVID clinics in the community.

BACKGROUND

The SARS-CoV-2 virus (termed COVID-19) is a novel respiratory virus, which has led to an international pandemic. COVID-19 has resulted in an unprecedented number of critically ill adults, which has overwhelmed intensive care unit (ICU) services worldwide. Endotracheal intubation and mechanical ventilation have been central management procedures for critically ill patients in ICU settings. Post-extubation dysphagia (PED) and dysphonia are common in critical care patients, 1,2 and recent research has highlighted PED and dysphonia within the COVID-19 population. 3,4 In a single-centre observational cohort study, 79% of adults referred to SLT over a two-month period during COVID-19 had been intubated, and all patients who had persistent dysphagia at discharge had been intubated. 3 In another single-centre study, 50% (N = 204) of adults admitted to the intensive care unit with COVID-19 infection were referred to SLT for a swallow assessment. 4 Of these patients, 33% required diet modification and 67% were not allowed oral intake. 4 Iatrogenic causes of dysphagia include prolonged intubation 5 and intubation injury including laryngeal oedema, granulations, ulceration and vocal cord immobility. 6 In a recent study of twenty patients with COVID-19 infection who underwent laryngeal endoscopy, the most common laryngeal complications were voice-related complaints, breathing and swallowing. 6
All participants who underwent laryngoscopy in this study presented with abnormal findings, and the most common diagnoses were vocal cord immobility, posterior glottic stenosis and subglottic stenosis. 6 The majority of these patients had been intubated for an average duration of three weeks during their inpatient admission. Moreover, all patients who had been proned during intubation presented with glottal pathology. 6 Tracheostomy insertion can lead to aspiration risk and difficulties managing secretions. 7 In a recent prospective study involving forty-one adults with COVID-19 infection who had a tracheostomy inserted post-extubation, 19% had severe pathology on laryngeal examination. 7 Of note, the vast majority of participants had the tracheostomy inserted beyond fifteen days of oral intubation. 7 Over half of the participant group presented with dysphonia, whereas 30% reported dysphagia, although 83% were on a normal diet. 7 Other potential factors contributing to PED are delirium, 8 proning, 6,9 disuse atrophy and critical illness neuropathy or myopathy during ICU stay 10 and neurological manifestations of COVID-19. 11 Central and peripheral nervous system complications of COVID-19 include stroke, encephalitis and Guillain-Barré syndrome. These can damage the neurological swallow network, contributing to dysphagia amongst COVID-19 survivors. 11
PED is associated with worse outcomes in ICU, including aspiration pneumonia, prolonged tube feeding, delayed initiation of oral intake, prolonged hospitalisation and increased morbidity and mortality. 12,13 Dysphonia is another recognised complication of intubation reported amongst adults with COVID-19.

Study design

This multi-site prospective observational cohort study is reported according to the STROBE guidelines for observational cohort studies. 19 Ethical approval for this study was obtained from the National Research Ethics Committee (NREC) (20-NREC-COV-051).

Settings

In this multi-centre observational cohort study, speech and language therapists from eleven acute hospitals across Ireland participated.

Participants

All adults admitted into a participating acute hospital in the ROI with COVID-19 and referred to SLT were included. Inclusion criteria were:

Independent variables

Demographic data included age, gender, pre-admission medical comorbidities and premorbid swallow status. During hospital admission, data on neurological manifestations of COVID-19 were captured by speech and language therapists from hospital records. The patient's most recent chest X-ray at the time of initial SLT assessment was rated by speech and language therapists, based on medical entries in the clinical notes, using a validated five-point ordinal scoring system provided in the dataset dictionary. 20

Swallow and voice outcomes

Swallowing and voice outcomes were influenced by curtailed access during the COVID-19 pandemic to instrumental assessments typically used in intensive care settings, such as fibreoptic endoscopic evaluation of swallowing (FEES). The presence and severity of PED was measured by speech and language therapists using the Functional Oral Intake Scale (FOIS). 22 The FOIS is a validated 7-point ordinal rating scale with high inter-rater reliability. 22 A second proxy dysphagia measure used by speech and language therapists was the food and fluid consistencies required for dysphagia management, using the International Dysphagia Diet Standardisation Initiative (IDDSI) framework. 23 Voice quality was evaluated by speech and language therapists using the overall Grade (G) score from the GRBAS scale. 24 The scale has established high rater reliability and is widely used in clinical research. 25

Data sources/management

One nominated speech and language therapist from each hospital site was responsible for data entry at each location. A dataset and dataset dictionary were emailed to named speech and language therapists at each participating site. Speech and language therapists were instructed to populate the dataset prospectively and return the anonymised data to the first authors for analysis.

Bias

To minimise observer bias, all clinicians used outcome measures routinely used in clinical practice with established rater reliability. Clear rules and procedures were in place for data collection, and data were clearly defined in a data dictionary provided to all settings. Merged data were anonymised to researchers.

Study size

Patients who met the eligibility criteria over the three-month data collection period were included in the study. The study size was determined by the prevalence of cases, and in particular those needing respiratory support. Statistical advice was obtained regarding recruitment numbers and statistical power for the sample.

Statistical analysis

Descriptive statistics were reported using medians and interquartile range (IQR) for continuous data. Categorical variables were presented as frequency (percentage). Variables were tested for normality using the Shapiro-Wilk test. To establish associations between dependent and independent variables, Spearman's rho correlations were conducted. To determine the trajectory of dysphagia and dysphonia from initial SLT assessment to SLT discharge, medians of ordinal dependent variables at both time points were compared using two-tailed Wilcoxon signed-rank tests. To determine independent predictors of oral intake status at the time of initial SLT assessment, a binary logistic regression was used. For voice quality, an ordinal logistic regression was completed with the overall (G) four-point ordinal GRBAS rating as the dependent variable. Six independent variables were selected for the voice regression model (intubation injury, proning, maximum cuff pressure, duration of intubation, number of comorbidities and history of respiratory disease). For both regression models, independent variables were selected based on evidence from previous research and a visual review of the data. Where a significant association was identified between independent variables (e.g., duration of intubation and presence of tracheostomy), only one was selected for a model. Mean imputation was used for one independent variable (maximum cuff pressure), as it was missing in 24/100 cases. Model fits were confirmed using likelihood ratio chi-squared tests. A two-sided α of less than 0.05 was considered statistically significant. Statistical analyses were completed using SPSS (v26) software. A sketch of these models follows.
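As an illustration of the models just described, here is a minimal sketch (the authors used SPSS; this is not their code, and the column names are hypothetical): a two-tailed Wilcoxon signed-rank test on paired FOIS scores, a binary logistic regression for oral intake status, and an ordinal (proportional-odds) logistic regression for the four-point GRBAS grade.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import wilcoxon
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fois_trajectory(df):
    """Two-tailed Wilcoxon signed-rank test: FOIS at initial SLT
    assessment versus FOIS at SLT discharge (paired ordinal data)."""
    return wilcoxon(df["fois_initial"], df["fois_discharge"],
                    alternative="two-sided")

def oral_intake_model(df, predictors):
    """Binary logistic regression for oral intake status post-extubation."""
    X = sm.add_constant(df[predictors])
    return sm.Logit(df["oral_intake_impaired"], X).fit(disp=0)

def grbas_model(df):
    """Ordinal logistic regression on the four-point GRBAS grade, with
    the six voice predictors named in the text (hypothetical names)."""
    exog = df[["intubation_injury", "proned", "max_cuff_pressure",
               "intubation_days", "n_comorbidities", "resp_disease"]]
    return OrderedModel(df["grbas_grade"], exog, distr="logit").fit(
        method="bfgs", disp=0)
```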
Participants

Data from 100 adults with PCR-confirmed COVID-19 infection who were intubated across eleven acute hospitals in the ROI and referred to SLT between March and June 2020 inclusive were included in the analysis. The 100 adults (69% male) had a mean age of 62 years (range 17-88 years). Further demographic details are in Table 1 and Table 2. Missing data for some of these variables were due to limited access to data across clinical settings.

Presence, severity and trajectory of swallowing and voice outcomes post-extubation

Median time between extubation and initial SLT evaluation was 4 days (IQR 2-11 days). Ninety per cent (n = 90) of patients presented with dysphagia (FOIS Level 1-6) at initial SLT assessment, with 36% not allowed oral intake (FOIS Level 1) (Table 3). The median FOIS score at initial SLT assessment was 2.5 (SD 2.139; range 1-7) (n = 100). IDDSI fluid and food consistency findings are detailed in Table 3. A significant negative correlation was observed between oral intake status at initial SLT assessment as rated by the FOIS and ICU length of stay (ICULOS) (r = −.227; p = .028), and also between oral intake status and hospital LOS, indicating that the lower the FOIS score, the longer the LOS (r = −.363; p = .000). Median voice quality rating also altered significantly from initial SLT assessment (GRBAS score 1) to SLT discharge (GRBAS score 0) (z = −5.619; p = .000). Details regarding alteration in swallow and voice outcomes are in Table 3.

SLT intervention needs and services provided

Over a third (n = 37/100) of patients required dysphagia intervention post-extubation (Table 6). In 70% (26/37) of these cases, dysphagia intervention was implemented, although 19% (7/37) had it provided in adapted form due to infection risk related to the pandemic. In 11% (4/37) of cases, dysphagia intervention was indicated but could not be provided at the point of service delivery due to pandemic service constraints (Table 6). Twenty per cent (n = 20/100) needed intervention for voice. In 30% (6/20) of these cases, voice intervention was indicated but could not be implemented, also due to pandemic service constraints.
In 30% (6/20) of cases, voice intervention was indicated but could not be implemented, also due to pandemic service constraints.

| DISCUSSION
In this study, 90% of patients intubated as part of COVID-19 management across the ROI who were referred to SLT presented with new-onset PED based on oral intake status. Over half (59%) required tube feeding based on SLT assessment, and over a third were not allowed oral intake post-extubation. This high rate of PED is consistent with recent research. 3,4 Post-extubation oral intake status was associated with length of ICU stay and hospital stay duration in this study, with reduced oral intake associated with longer duration of ICU and hospital admissions. There was a threefold increase in impact on oral intake status with proning in this study. Lower cranial nerve paralysis and oropharyngeal oedema have previously been linked to proning, and cranial nerves IX to XII are hypothesised to be affected by proning. 6,26 Pre-existing respiratory disease was also identified as a positive predictor of PED in this study. Adults with respiratory disease may already have altered respiratory-swallow coordination, which could be exacerbated post-extubation. There was approximately a 6% increase in the relative odds of altered oral intake status per year of age in this study. Older people may have pre-existing presbyphagia, which predisposes them to PED. Furthermore, frailty and sarcopaenia may also be prevalent amongst older people, which could contribute towards PED. In contrast to previous research, 5 duration of intubation was not predictive of oral intake status in this study. This may be because patients with tracheostomy were not excluded, as the researchers aimed to capture all adults with COVID-19 post-extubation. Additionally, prolonged intubation duration with COVID-19 may explain the contrast with previous PED research. There was a tenfold increase in impact on voice quality for those with intubation injury, which aligns with previous research. Post-extubation dysphonia and dysphagia research is needed from future pandemic waves to establish the impact of evolving intensive care management and mutating virus variants on voice and swallowing outcomes. Post-discharge time points to capture longer term voice and swallowing difficulties would guide multidisciplinary service delivery in the community.

| CONCLUSIONS
This study highlights the prevalence of post-extubation dysphagia and dysphonia amongst adults intubated with COVID-19. Awareness of the predictors of altered swallowing and voice quality post-extubation will promote early in-depth evaluation and monitoring during hospital stay. Prompt dysphagia and dysphonia evaluation and management is needed to minimise clinical and quality-of-life complications.

| ACKNOWLEDGEMENTS
Thanks to participants across clinical settings for agreeing to contribute data for this study. Thanks also to all the speech and language therapists across data collection sites who assisted with local data collection for the purposes of this research.
| DISCLOSURE STATEMENT
Authors have no disclosures to report.

| CONFLICTS OF INTEREST
None.

| AUTHOR CONTRIBUTION
J. Regan and M. Walshe designed the study, applied for ethical approval, analysed the data and wrote the paper. All other authors contributed to the study design, acquired and transferred data for analysis and contributed to data analysis and interpretation. All authors gave approval for the paper to be submitted for publication.

| DATA AVAILABILITY STATEMENT
Authors are unable to share data due to ethical approval restrictions.

| ETHICS STATEMENT
Ethical approval for this study was obtained from the National Research Ethics Committee (NREC) (20-NREC-COV-051).
Applying PET-CT for predicting the efficacy of SBRT to inoperable early-stage lung adenocarcinoma: A Brazilian case-series

Summary
Background Stereotactic body radiotherapy (SBRT) is a treatment option for early-stage inoperable primary lung cancer. Here we report a thorough description of the prognostic value of pre-SBRT SUVmax for predicting the efficacy of SBRT in early-stage lung adenocarcinoma. Methods This is a retrospective study of consecutive cases of early-stage inoperable lung adenocarcinoma, staged with PET-CT and treated with SBRT between 2007 and 2017. Kaplan-Meier (KM) curves were used to assess overall survival and to compare time to event between those with PET-CT SUVmax values ≤ 5.0 and those > 5.0. Fisher's exact tests and the Mann-Whitney U were used to compare the patient and clinical data of those with SUVmax ≤ 5.0 and > 5.0, and of those with and without any failure. Findings Amongst 50 lung carcinoma lesions from 47 patients (34 (68%) T1a or T1b), the estimated median overall survival from the KM analysis was 44.9 months (95% confidence interval 35.5–54.3). Five patients experienced a local failure, a number inadequate for detecting differences between those with PET-CT SUVmax ≤ 5.0 and those > 5.0 (p = 0.112). In addition, 5 experienced a regional failure and 4 a distant failure. Higher PET-CT SUVmax values before SBRT were associated with an increased risk of any failure (36% versus 0%, p = 0.0040 on Fisher's exact test) and faster time to event (p = 0.010, log rank test). Both the acute and late toxicity profiles were acceptable. Interpretation Patients with early-stage inoperable lung adenocarcinoma present good clinical outcomes when treated with SBRT. We raise the hypothesis that the value of PET-CT SUVmax before SBRT may be an important predictive factor for disease control. Funding None.

Introduction
Lung cancer has the highest incidence in the world, corresponding to 11.6% of the total cancer cases diagnosed in 2018. 1 Non-small cell lung cancer (NSCLC) represents 80% of all cases of lung cancer, with adenocarcinoma being the most common NSCLC subtype (40−50% of lung tumors). 2,3 For patients with early-stage (stage I and II) NSCLC who are inoperable or who present with multiple comorbidities and/or older age, stereotactic body radiotherapy (SBRT), also called stereotactic ablative radiotherapy (SABR), is a well-established treatment option. 4,5 Previously, a different case series reported on 54 elderly (median age 75 years) lung cancer patients treated with SBRT. 6 All patients were considered clinically inoperable (mainly due to multiple comorbidities). The results were comparable to those of the main published series, with 90% local control and 80% 2-year overall survival. 6 Despite the rate of local failure after SBRT being low, understanding the histological and imaging characteristics of NSCLC has become important for prognostic definition and management. In the literature, multiple surgical series described that local failure was lower in early-stage NSCLC patients with adenocarcinoma when compared to other histologies (squamous cell carcinoma or large cell). 7−9 A meta-analysis including thirteen studies indicated that higher primary-lesion Maximum Standard Uptake Values (SUVmax), both before and after RT, seen on positron emission tomography with 2-deoxy-2-[fluorine-18]fluoro-D-glucose integrated with computed tomography (FDG-PET-CT), can negatively impact the outcome of patients with non-metastatic NSCLC treated with RT. 10
SUVmax integrates knowledge on the metabolic and biological activity of the tumour and has been used as a prognostic marker in NSCLC. Some specialized teams have endeavoured to describe the significance of the pre-SBRT SUVmax value in the setting of stage I-II NSCLC. 11 However, to our knowledge, there is a lack of data reporting on lung adenocarcinoma alone. Here we report a thorough description of the prognostic value of the pre-SBRT SUVmax value in predicting the efficacy of SBRT in early-stage lung adenocarcinoma.

Methods
We performed a Research Ethics Board approved (REB number HSL 2014-30 under the registry FYdM221) retrospective study, which was carried out in the radiotherapy department of Hospital Sírio-Libanês (São Paulo, SP, Brazil). We assessed consecutive lung cancer patients treated between January 2007 and May 2017. We included patients with (1) biopsy-proven primary lung adenocarcinoma; (2) staged as T1 or T2N0M0, T3N0M0 (more than one lesion in the same lobe) or T4N0M0 (more than one lesion involving lobes other than the ipsilateral lung) according to the 8th edition of the TNM of the International Union for Cancer Control (UICC); (3) staged with FDG-PET-CT and CT (for better T stage definition); (4) considered inoperable by a multidisciplinary team; and (5) treated with SBRT. We excluded patients with (1) tumors > 5 cm; (2) pulmonary metastases from other primary tumors; (3) a diagnosis of idiopathic pulmonary fibrosis; or (4) in treatment for another cancer. The article was organised based on The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations. Demographic, clinical, tumor-related and dosimetric data were collected. Performance status was assessed using the Eastern Cooperative Oncology Group (ECOG) scale (ECOG-PS). Data on comorbidities and the cause of non-operability were also collected, in addition to the pre- and post-SBRT FDG-PET-CT SUVmax and the tumor characteristics on CT scans. To perform lung SBRT, a semi-rigid system for patient positioning and immobilization was developed and piloted (2007) at our department. As of June 2012, commercial accessories (BodyFIX; Elekta, Stockholm, Sweden) were used for all treatments. All patients were planned with a 3D conformal technique, using noncoplanar fields. Treatment planning followed the protocols of RTOG-0236 (NCT00087438) or RTOG-0813 (NCT00750269). Total dose and fractionation scheme were defined considering the location of the tumor (Figure 1 - imaging representation of the bronchial tree zone), its size and the best evidence available at the time of treatment: most commonly, three fractions were used for peripheral lesions and five for central lesions. Dose delivery was performed through stereotactic coordinates using image-guided radiotherapy (IGRT) with cone beam computed tomography (CBCT), where not only the patient but also the internal tumor position was evaluated immediately before SBRT. A full description of our department's planning and treatment scheme has been previously published. 6

Research in context
Evidence before this study: For patients with early-stage lung cancer who are inoperable, stereotactic body radiotherapy (SBRT) is a well-established treatment option. However, the prognostic value of the pre-SBRT SUVmax value in predicting the efficacy of SBRT in early-stage inoperable lung cancer patients has not been broadly investigated, especially in Brazil.
Added value of this study: The current study is the first to report the prognostic value of the pre-SBRT SUVmax value in inoperable lung adenocarcinoma treated with SBRT in Brazil. Our findings suggest that patients with adenocarcinoma treated with SBRT have good clinical outcomes. The study also raised the hypothesis that the pre-SBRT PET-CT SUVmax value has potential as a predictor of disease control.
Implications of all evidence available: The article highlights that new radiation technologies are applicable outside high-income countries. It also suggests a role for PET-CT SUVmax in predicting failure after SBRT treatment. Finally, it provides benchmark data for future studies in the region.
Endpoints
The primary endpoint of the study was to assess the prognostic value of the pre-SBRT SUVmax value for predicting local failure in early-stage lung adenocarcinoma treated with SBRT. The SUVmax was categorized as ≤ 5.0 versus > 5.0 on the basis of a previous meta-analysis. 10 Secondary outcomes included overall survival, the incidence of regional, distant and any failure, and the toxicity profile. All outcomes were assessed from the date of the last SBRT fraction to the last follow-up or death date. The outcomes were defined as follows:
- Local failure: recurrence in a region of up to 1 cm around the treated planning target volume (PTV), defined in the presence of one of the following criteria: (1) CT scan with a mass pattern with consolidation patterns increasing in size (cranial-caudal growth on CT imaging ≥ 25%) without inflammatory signs; and/or (2) FDG-PET-CT study with increased SUVmax uptake in the lung lesion over time (at least one PET-CT, 30 days or more after SBRT). Biopsies were not performed due to a high risk of complications (elderly, inoperable patients with multiple comorbidities) or if the patient declined.
- Regional failure: recurrence more than 1 cm away from the PTV, within the parenchyma (same pulmonary lobe) or central structures of the mediastinum.
- Distant failure: metastasis in the contralateral lobe or distant organs.
- Acute (≤ 6 months) and late (> 6 months) toxicities were defined based on the Common Toxicity Criteria for Adverse Events (CTCAE) v4.0 and by medical evaluation.
- Any treatment failure: local, regional, or distant failure.

Statistical analysis
Descriptive analysis of patients and lung lesions was performed. Categorical variables were summarized by frequency and percentage, and quantitative variables by median with quartiles. Follow-up was defined as the time from the end of SBRT until death or last seen. Surviving patients were censored on the date of the last chest image. Age, ECOG-PS, size of the lesion in centimetres, duration of SBRT (time interval between the first and last fraction of SBRT), pre- and post-SBRT FDG-PET-CT SUVmax of ≤ 5.0 versus > 5.0, 10 dosimetric factors, BED 10 , and response on FDG-PET-CT SUVmax were assessed in two ways. Initially, their associations with PET SUVmax ≤ 5.0 versus > 5.0 were assessed using Fisher's exact test (using a two-tailed probability based on double the exact one-tailed probability) for categorical and ordinal data, and the Mann-Whitney U for continuous data. Then their association with any failure (local, regional or distant) was assessed using the same methodology. The number of events was too small for multivariable analysis. Kaplan-Meier curves with the log rank test were used to compare survival time and time to any failure for PET SUVmax ≤ 5.0 versus > 5.0. Statistical significance was established at p < 0.05 and no adjustment was made for multiple comparisons. Statistical analysis was performed using the Stata software, version 13.0 (StataCorp, College Station, TX) and IBM SPSS, version 27 (Armonk, NY, 2020).
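The comparisons described above can be sketched in Python with SciPy and the lifelines package. This is an illustrative re-implementation only, since the study used Stata 13.0 and SPSS 27; the file name and column names (`suv_high`, `any_failure`, `lesion_cm`, `os_months`, `death`) are hypothetical placeholders.

```python
import pandas as pd
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

data = pd.read_csv("sbrt_cohort.csv")  # hypothetical file
grp = data["suv_high"].astype(bool)    # flag for SUVmax > 5.0
low, high = data[~grp], data[grp]

# Fisher's exact test of any failure vs SUVmax group; the paper quotes a
# two-tailed probability taken as double the exact one-tailed value.
table = pd.crosstab(grp, data["any_failure"])
_, p_one_tailed = stats.fisher_exact(table, alternative="greater")
p_doubled = min(1.0, 2 * p_one_tailed)

# Mann-Whitney U for a continuous variable (e.g. lesion size in cm).
_, p_mwu = stats.mannwhitneyu(low["lesion_cm"], high["lesion_cm"])

# Kaplan-Meier estimate of overall survival, with a log rank comparison
# of the two SUVmax groups.
kmf = KaplanMeierFitter()
kmf.fit(data["os_months"], event_observed=data["death"])
print("median OS (months):", kmf.median_survival_time_)

result = logrank_test(low["os_months"], high["os_months"],
                      event_observed_A=low["death"],
                      event_observed_B=high["death"])
print("log rank p =", result.p_value)
```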
Role of the funding source
There was no funding for this work.

Results
We also provided a comparison of patient characteristics for those with PET SUVmax ≤ 5.0 versus those > 5.0 (n = 47, Table 1), and none differed significantly. Table 2 provides the same comparison for lung lesions (n = 50), and no lung lesion characteristics differed significantly between the two groups. However, the rate of PET-CT performed at follow-up was higher for those with PET SUVmax > 5.0 (p = 0.017). The median follow-up for this cohort was 19.1 months (IQR 12.9−33.8). The estimated median overall survival from the Kaplan-Meier curve was 44.9 months (95% CI 35.5−54.3). Figure 2 provides the survival curves for the 19 patients with SUVmax ≤ 5.0 versus the 28 with SUVmax > 5.0 and suggests that the two groups are very similar in terms of overall survival (log rank test p = 0.804). Over the total follow-up, we observed 14 (30%) deaths, 6 (43%) related to lung cancer and 8 (57%) not related. Five local failures were observed, although the numbers were insufficient to demonstrate a statistically significant difference (p = 0.112). In addition, there were 5 regional failures (5/47 per patient − 11%), 4 distant failures (4/47 per patient − 8%; 1 liver and peritoneum, 1 liver and bone, 1 liver, and 1 liver and brain), and 10 (10/47 per patient − 21%) any-failure events. Among 47 patients assessed for acute toxicities, 18 (38%) grade 1 or 2 acute toxicities were identified. The reported toxicities were 12 (24%) grade 1 pneumonitis (asymptomatic - diagnosed by imaging only), 5 (10%) grade 2 pneumonitis (symptomatic, but without the need for supplemental oxygen) and 1 (2%) grade 2 chest pain (moderate pain, limiting daily activities). Additionally, we observed 1 (2%) grade 3 pneumonitis (severe respiratory symptoms that limit daily activities and for which the use of oxygen is recommended), for which the patient required hospitalization due to concomitant pneumonia (both resolved), and 1 (2%) grade 4 skin dermatitis (skin necrosis or full-thickness dermis ulceration), for which the patient needed local treatment with resolution of the dermatitis. Among 47 patients assessed for late toxicities, 6 (13%) grade 1 or 2 late toxicities were observed: 4 (8%) grade 1 chest pain (mild pain without impact on daily activities) and 2 (4%) grade 1 and 2 pneumonitis, respectively. No rib fractures were identified. The results of univariate analyses performed to explore factors related to any failure (local, regional and distant) are shown in Table 3. A higher FDG-PET-CT SUVmax value at diagnosis was associated with a significantly higher rate of failure (36% versus 0%, p = 0.0040).

Discussion
Our results reinforce that the use of SBRT for the treatment of inoperable early-stage adenocarcinoma of the lung in elderly patients is safe and effective. SBRT provides prolonged median survival, with low 12-month failure rates. Our primary outcome, the number of local failures (5), was too low to demonstrate any statistically significant findings.
However, when local failures were combined with the regional and distant failures, we were able to report a potential association between the pre-SBRT FDG-PET-CT SUVmax value and any failure (p = 0.003), particularly since there were no failures among the 19 patients with SUVmax below 5.0 at diagnosis. Our survival findings are important and encouraging. In our study the estimated median OS was 44.9 months, which is better than in prospective studies such as RTOG 0236, with a median OS of 36 months, and RTOG 0813, with a median OS of 38 months, but lower when compared to RTOG 0915, with a median OS of 55 months (arm treated with 48 Gy in 4 fractions). 12−14 It is important to highlight that the patients included in our study presented with adenocarcinoma, had a median age of 76 years, 85% had at least one clinical comorbidity, 40% were considered inoperable and a minority received systemic therapy. Thus, we believe that the discrepancy in median OS when compared to the RTOG studies may be related to patient selection, systemic therapy at the time of failure and the percentage of adenocarcinoma included (RTOG 0236 − 39%, RTOG 0813 − 35%, RTOG 0915 − 58%, the latter with the longest survival). 12−14 In the present study we reported 8 (16%; total 50 lesions) local failure events and 18 (38%; total 47 patients) any-failure events. According to a recent literature review of patients with stage I-II NSCLC treated with SBRT, locoregional recurrence rates are between 0% and 18%, and distant recurrence rates are between 17% and 34% in the first 2 years, which is consistent with our results. 15 When we compare our results with the Cleveland Clinic retrospective series that compared local failure of squamous-type NSCLC versus adenocarcinoma, we notice that the incidence of local failure was higher in our study: 16% in 2 years versus 8.7% for the adenocarcinoma group of the Cleveland Clinic study. 16 The reported difference in local control is probably related to patient selection and the criteria used to define local failure. In the Cleveland Clinic study, local failure was defined by radiographic progression on CT followed by at least one FDG-PET-CT examination. 16 If failure was defined by imaging, biopsy of the lesion was performed, and post biopsy only 40.3% actually had local recurrence. In our study, local failures were defined by CT in most cases, only 50% of patients underwent FDG-PET-CT at follow-up, and confirmatory biopsies were not performed. For these reasons, we may have overestimated our local failures, as we know from the Cleveland Clinic study that 59.7% of the cases considered local failures by imaging were false positives. However, when we analyzed the pre-SBRT FDG-PET-CT SUVmax value, it proved to be an interesting prognostic parameter in our cohort. No patients with pre-SBRT FDG-PET-CT SUVmax ≤ 5.0 developed local, regional or distant failure.

Table 3: Factors associated with any failure (local, regional or distant) for PET-CT staged lung adenocarcinoma treated with SBRT. Values are reported as frequencies (%) and medians (quartiles). * P-value is listed as indicative, since the length of risk of failure will differ between patients, and hence the strict assumptions for the standard tests are not satisfied. P-values are based on Fisher's exact test (using a two-tailed probability based on double the exact one-tailed probability) and the Mann-Whitney U, also based on double the exact one-tailed probability.
A systematic review involving 13 studies showed that both the pre-RT SUVmax and the post-RT SUVmax can predict outcomes in patients with NSCLC treated with radiotherapy. 10 The study also showed that patients with higher values of pre-RT SUVmax (> 5.0) usually have worse overall survival and reduced local control. 10 Additionally, some studies reported a correlation between residual SUV uptake 3 months after SBRT and an increased risk of local failure. 17,18 To our knowledge, when investigating the use of SBRT for early-stage lung disease and FDG-PET-CT, there are two main published studies; however, neither evaluated adenocarcinoma alone. First, the Memorial Sloan Kettering Cancer Center study included 219 stage I NSCLC lesions treated with SBRT and showed that SUVmax > 3.0 is associated with worse outcomes in overall survival, local failure and regional failure. 11 Second, the Mayo Clinic study, which included 282 consecutive patients (99 with adenocarcinoma) with early-stage NSCLC, showed that the pre-SBRT FDG-PET-CT SUVmax value (continuous) was associated with the risk of any failure (p = 0.02) but not with local failure alone (p = 0.69). 19 The results of these two studies are in accordance with our study findings and demonstrate the prognostic importance of pre-SBRT FDG-PET-CT. Regarding treatment-related toxicity, our study showed that grade 1 pneumonitis (asymptomatic) was the most frequent acute adverse event (12, or 24% of the treated lesions). In the literature, acute grade 1 pneumonitis is described in 30−70% of cases treated with SBRT. 20 Grade 2 pneumonitis (symptomatic, but without the need for oxygen) was found in 10% of our cases, which is in accordance with the published literature (around 10%). As less frequent acute toxicities, we observed 1 (2%) case of grade 2 chest pain (rate described in the literature around 6−10%), 1 (2%) case of grade 3 pneumonitis, which was explained by an overlapping lung infection (rate described in the literature around 3−5%), and 1 (2%) case of grade 4 skin toxicity (a rare event in the literature, < 3%). 20 Regarding late toxicities, we observed only grade 1 and 2 toxicities (6, or 12% of the cases), which is in line with the percentages reported in the literature. 20 Although the findings reported here are important, our study has limitations that should be discussed; these include its retrospective design, limited sample size, the limited number of patients who underwent FDG-PET-CT during follow-up and low statistical power for comparisons between groups. Ideally this analysis would involve competing risks (mortality and failure), but the sample size and number of events were too small for reliable estimates. Another limitation is the absence of confirmatory biopsy for local recurrence, but this is justifiable as we are reporting on a cohort of elderly, inoperable patients with multiple comorbidities who were either at high risk of complications from a biopsy or declined one. The lack of biopsy is not a major weakness, because CT evaluation is recognised in the literature as standard practice for follow-up and failure definition. In addition, PET-CT is used for cases presenting with high-risk features of failure on CT.

Conclusion
Patients with inoperable, early-stage adenocarcinoma of the lung showed good outcomes (low failure rates and prolonged survival) when treated with SBRT. An FDG-PET-CT SUVmax value ≤ 5.0 was associated with better outcomes (i.e., no failure). The incidence of both acute and late toxicities was low and acceptable.
Larger prospective multi-center trials are required to validate our findings.
A HITRAN-formatted UV line list of S$_2$ containing transitions involving $X\,^{3}\Sigma^{-}_{g}$, $B\,^{3}\Sigma^{-}_{u}$, and $B''\,^{3}\Pi_{u}$ electronic states

The sulfur dimer (S$_2$) is an important molecular constituent in cometary atmospheres and volcanic plumes on Jupiter's moon Io. It is also expected to play an important role in the photochemistry of exoplanets. The UV spectrum of S$_2$ contains transitions between vibronic levels above and below the dissociation limit, giving rise to a distinctive spectral signature. By using spectroscopic information from the literature and the spectral simulation program PGOPHER, a UV line list of S$_2$ is provided. This line list includes the primary $B\,^{3}\Sigma^{-}_{u}-X\,^{3}\Sigma^{-}_{g}$ ($v'$=0-27, $v''$=0-10) electronic transition, where vibrational bands with $v'$$\geq$10 are predissociated. Intensities have been calculated from existing experimental and theoretical oscillator strengths, and semi-empirical strengths for the predissociated bands of S$_2$ have been derived from comparisons with experimental cross-sections. The S$_2$ line list also includes the $B''\,^{3}\Pi_{u}-X\,^{3}\Sigma^{-}_{g}$ ($v'$=0-19, $v''$=0-10) vibronic bands due to the strong interaction with the $B$ state. In summary, we present the new HITRAN-formatted S$_2$ line list and its validation against existing laboratory spectra. The extensive line list covers the spectral range 21700$-$41300~cm$^{-1}$ ($\sim$242$-$461~nm) and can be used for modeling both absorption and emission.

Introduction
S 2 is a key intermediary in the exothermic polymerization of elemental sulfur en route to octasulfur, S 8 (Kasting et al., 1989; Shingledecker et al., 2020), the stable molecular form. The polymerization of sulfur towards S 8 can be interrupted by the photolysis of S 2 , and it has also been noted that S n O can photolyze to S n (Francés-Monerris et al., 2022). Although the formation pathways of polysulfur molecules (including S 2 ) in the atmosphere of Venus are still under debate (Francés-Monerris et al., 2022), it has been suggested that photodissociation of S 2 could fill a needed gap in Venusian atmospheric models (Francés-Monerris et al., 2022). While S 2 may only be an intermediary, its UV absorption can help infer sulfur chemistry even without the detection of S 8 . Hobbs et al. (2021) have investigated thermochemical and photochemical sulfur reactions in the atmospheres of warm and hot Jupiters. They found that at 10 −3 bar and at temperatures around 1000 K, mixing ratios of S 2 can be up to 10 −5 . The recent detection of CO 2 (JWST Transiting Exoplanet Community Early Release Science Team et al., 2023), and of an apparent SO 2 absorption feature (Rustamkulov et al., 2022), in the atmosphere of exoplanet WASP-39b has increased the need to better understand the spectroscopy of photochemically produced sulfur species. In particular, for WASP-39b it has been predicted that S 2 is a key molecule in the photochemical pathway to forming SO 2 and is expected to be the most abundant sulfur-containing molecule at pressures probed by JWST transmission spectra during the evening terminator (Tsai et al., 2022).
While many works have investigated the energy levels and spectrum of S 2 , including the analysis of perturbations (e.g., Green and Western, 1996) and predissociated bands (e.g., Lewis et al., 2018), an accurate S 2 line list that is capable of reproducing the UV spectrum of S 2 at high resolution is currently unavailable in the literature or in public databases. This was highlighted by Kim et al. (2003) when building their fluorescence model for analyses of cometary spectra. Kim et al. (2003) extended their atlas (Kim, 1994) by including limited experimental information from the literature, which included some direct entries from early experimental works (Ikenoue, 1953, 1960). However, as acknowledged by the authors, their model was still very limited, particularly in terms of accounting for perturbations. Similarly, in analyses of Io spectra, Spencer et al. (2000) used an unpublished line list that was calculated for the purposes of their work. Perturbations were not accounted for, and some of the experimental intensity information which now exists was not available at that time (e.g., Stark et al., 2018). Recently, Sarka and Nanbu (2023) have released an ab initio line list of S 2 , but spin-splitting and perturbation effects have not been accounted for, which limits the accuracy for high-resolution applications.

S 2 photolysis has previously been estimated in photochemical models based on inferences from solar system cometary analyses (de Almeida and Singh, 1986; Ueno et al., 2009; Hu et al., 2012) and has been scaled by the actinic flux at 300 nm. Use of this approximation can lead to inaccurate photolysis rates on planets orbiting stars of other spectral types (e.g., M-dwarfs), where the shape of the spectral energy distribution (SED) is different from that of the Sun. Another consequence is that it disregards photochemical self-shielding, which is caused by overlying S 2 and/or other overlying molecules. Hobbs et al. (2021) employed the calculated cross-sections from the Leiden database (Heays et al., 2017), but it should be noted that the photodissociation cross-sections measured subsequently by Stark et al. (2018) differ substantially from those calculated in Heays et al. (2017).

The goal of this work is to provide a reliable, publicly available line list of S 2 . A line-by-line parameterization, such as that employed by the HITRAN database (Gordon et al., 2022), has advantages over the parameterizations previously available for S 2 spectra. Such line lists are compatible with the majority of community radiative transfer codes, thereby allowing the generation of cross-sections at a variety of thermodynamic conditions and over a wide spectral range. One of the peculiarities of parameterizing the UV line list of the sulfur dimer is that it has to be capable of simulating the UV spectrum at high resolution below the dissociation limit, while also reproducing the diffuse spectrum above the dissociation limit where rovibronic features cannot be resolved. Indeed, the predissociation widths of the disulfur transitions that have upper levels above the dissociation limit reach tens of wavenumbers, therefore obscuring the resolved structure of individual transitions. The S 2 line list from this work will, therefore, need to be capable of reproducing these effects.
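Before turning to the spectroscopy, the photolysis point made above can be stated explicitly. The photolysis rate coefficient that a cross-section or line list enables is obtained by integrating over the full spectral range rather than scaling the flux at a single wavelength; in schematic form,

$$ J_{\mathrm{S}_2} = \int \sigma(\tilde{\nu})\, \phi_{\mathrm{diss}}(\tilde{\nu})\, F(\tilde{\nu})\, \mathrm{d}\tilde{\nu}, $$

where $\sigma$ is the absorption cross-section, $\phi_{\mathrm{diss}}$ the dissociation quantum yield, and $F$ the actinic photon flux. Self-shielding enters through the attenuation of $F$ by the overlying S$_2$ column, which the single-wavelength scaling cannot capture.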
Spectroscopy of the S 2 molecule
S 2 is isoelectronic to molecular oxygen, O 2 . The two molecules have similar electronic states and exhibit similar visible and UV bands. Thus, comparisons between S 2 and the much better studied O 2 system provide valuable insights when generating a comprehensive S 2 line list. Unlike oxygen, S 2 is an unstable molecule. In the laboratory it is produced in sulfur-containing flames and discharges, since it mainly forms at high temperatures (800 K) (Wheeler et al., 1998). The B − X transition is quite intense and emits a blue color, as seen in experimental and cometary spectra. Figure 1 of Sun et al. (2019) provides a detailed comparative overview of the lowest electronic states of the S 2 and O 2 molecules. Considering that in this work we concentrate only on part of the UV spectrum, here we provide only a brief summary of the states of S 2 . Just as in the case of molecular oxygen, the ground electronic state of disulfur is X 3 Σ − g . The lowest excited states are two singlet states, a 1 ∆ g and b 1 Σ + g , with T e = 4322.99 and 7788.72 cm −1 , respectively (Xing et al., 2013). Electric dipole transitions involving these states and the ground state are spin-forbidden; however, much weaker transitions are possible through magnetic dipole and electric quadrupole mechanisms (Setzer et al., 2003; Fink et al., 1986). Above them lie the A 3 Σ + u , A ′ 3 ∆ u , and c 1 Σ − u states, which, in the oxygen molecule, are responsible for the so-called Herzberg bands. The electronic state of interest in this work is B 3 Σ − u , with term energy T e = 31967.36 cm −1 ; transitions between this state and the ground state are analogous to the so-called Schumann-Runge bands in molecular oxygen. This state is in close proximity to the B ′ 3 Π g and B ′′ 3 Π u states (Xing et al., 2020), and the interference from the latter has to be taken into account in our line list. At higher energies, near 41 000 cm −1 , lie the e 1 Π g and f 1 ∆ u states. S 2 also has a number of unbound electronic states that cross the bound B 3 Σ − u state at increasing energy in the order 1 1 Π u , 1 3 Π g , 1 1 Π g , 1 5 Σ − g , 1 5 Π u , and 2 3 Σ + u (Sun et al., 2019; Wheeler et al., 1998).
In the HITRAN database, the quantum notation of transitions between states obeying Hund's case (b) is assumed for molecules in triplet states (see Appendix of Gordon et al., 2022). Under this formulation, each rotational level N is split into three spin components with total angular momentum J = N + S, and this notation is adopted here for consistency. In the case (b) formalism, the spin components are defined as F 1 : J = N + 1, F 2 : J = N, and F 3 : J = N − 1. However, as will be seen in the next section, unlike the case of O 2 , the spin-spin coupling constant of S 2 is much larger than the rotational constants, causing the splitting of the spin components of an individual rotational level to be larger than the separation between adjacent levels; therefore Hund's case (a) is more appropriate, at least for relatively low rotational levels. Indeed, although S 2 has a Σ ground state, the total electron spin is equal to 1, therefore the projection of the total electronic angular momentum on the internuclear axis, Ω = Λ + Σ (where Λ is the projection of the orbital angular momentum L, and Σ is the projection of the total electron spin S), can be 0 or 1. Therefore, S 2 can exhibit case (a) behavior. In the case (a) formalism of intermediate coupling, F 1,3 are a mixture of Ω = 0 and Ω = 1 components, whereas F 2 is pure Ω = 1. For the case when the spin components can be considered uncoupled (Hund's case (c)), F 1 corresponds to Ω = 0 only, whereas F 2,3 correspond to Ω = 1. Just as for the 16 O 2 isotopologue in the ground electronic state, in 32 S 2 only levels with total parity (+) are allowed due to nuclear spin statistics (the nuclear spin of 32 S is zero); therefore, in the Hund's case (b) formalism, every other rotational level (i.e., the even levels) is missing. In the excited B 3 Σ − u state, total parity (+) levels correspond to only even rotational states being populated, and the odd levels are missing. Selection rules require ∆J to be equal to 0 or ±1, and 14 branches are possible for the B 3 Σ − u − X 3 Σ − g transition. Traditionally, in spectroscopic papers, the ∆N ∆J F ′ F ′′ (J ′′ ) notation for line assignments is used. In this notation, it is common to refer to six major branches between the same spin components ( R R 11 , R R 22 , R R 33 , P P 11 , P P 22 , and P P 33 ) with eight weaker satellite branches ( N P 13 , R P 31 , P Q 12 , P Q 23 , R Q 32 , R Q 21 , P R 13 , and T R 31 ). However, in our line list we employ the ∆N (N ′′ )∆J(J ′′ ) notation, which is closer in appearance to the quantum notations given in traditional ASCII files in the static 160-character HITRAN format. In this notation, seemingly only eight branches are possible for the B 3 Σ − u − X 3 Σ − g transition: N(N ′′ )P(J ′′ ), P(N ′′ )P(J ′′ ), P(N ′′ )Q(J ′′ ), P(N ′′ )R(J ′′ ), R(N ′′ )P(J ′′ ), R(N ′′ )Q(J ′′ ), R(N ′′ )R(J ′′ ), and T(N ′′ )R(J ′′ ), where N ′′ and J ′′ refer to the lower-state values. However, both the HITRAN and traditional notations are able to uniquely identify each transition (e.g., T(5)R(6) or T R 31 (6) shown in Fig. 1). One has to keep in mind that some of these "branches" in the HITRAN notation represent more than one branch in the traditional notation; for instance, R(N ′′ )R(J ′′ ) transitions can be R R 33 (J ′′ ), R R 22 (J ′′ ) and R R 11 (J ′′ ). Nevertheless, since they correspond to transitions between different spin components, they can be uniquely identified with the rotational and total angular momentum quanta in the case (b) framework.
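The branch bookkeeping described above can be checked with a short script. The sketch below enumerates, for a lower level N″ = 5 of 32S2 (only odd N″ populated in the ground state, only even N′ in the B state, ΔJ = 0, ±1), all allowed spin-component combinations and recovers the 14 branches listed above in both notations. This is an illustrative enumeration written for this text, not code from PGOPHER or HITRAN.

```python
# delta gives J - N for the spin components F1, F2, F3 in Hund's case (b).
delta = {1: +1, 2: 0, 3: -1}
letters = {-3: "N", -2: "O", -1: "P", 0: "Q", +1: "R", +2: "S", +3: "T"}

N_lower = 5  # odd N'' populated in the X state of 32S2
branches = []
for f_lower, d_lower in delta.items():
    J_lower = N_lower + d_lower
    for dJ in (-1, 0, +1):                 # electric dipole selection rule
        J_upper = J_lower + dJ
        for f_upper, d_upper in delta.items():
            N_upper = J_upper - d_upper
            dN = N_upper - N_lower
            # Nuclear spin statistics: only even N' exist in the B state,
            # so dN must be odd; |dN| <= 3 for these spin components.
            if N_upper >= 0 and dN % 2 != 0 and abs(dN) <= 3:
                trad = f"{letters[dN]}{letters[dJ]}{f_upper}{f_lower}({J_lower})"
                hitran = f"{letters[dN]}({N_lower}){letters[dJ]}({J_lower})"
                branches.append((trad, hitran))

for trad, hitran in branches:
    print(f"{trad:>10s}  <->  {hitran}")
print(len(branches), "branches")  # prints 14, e.g. TR31(6) <-> T(5)R(6)
```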
Fig. 1 demonstrates all branches of the B 3 Σ − u − X 3 Σ − g electronic transition with N ′′ = 5, where each individual transition has been labeled with both notations.

The higher-lying electronic states of S 2 are severely perturbed and predissociated. The predissociated bands of S 2 are caused by the spin-orbit interaction with the crossing ungerade electronic states (Lewis et al., 2018) and by the B state crossing the dissociation limit for v ≥ 10 (Green and Western, 1996). Predissociation causes individual rovibronic lines to have lifetime broadening (FWHM) of up to 50 cm −1 (Lewis et al., 2018; Wheeler et al., 1998). The lifetime broadening exceeds typical pressure broadening by up to three orders of magnitude and, therefore, results in a series of unresolved bands at higher frequencies. Although well described by the Lorentzian profile (as both pressure and predissociation line widths are lifetime-driven broadening), the predissociation widths do not have the pressure and temperature dependence of pressure-broadened widths.

The ungerade electronic states are responsible for contributing to the predissociation. Patiño and Barrow (1982) proposed that the primary perturber of the lower vibrational levels of the B state was the B ′′ 3 Π u state. Matsumi et al. (1984, 1985) were able to measure the transitions involving the B ′′ 3 Π u state directly, and Figure 2 of Matsumi et al. (1985) effectively shows the nature of the perturbations. For higher vibronic levels, Wheeler et al. (1998) have predicted that the 1 Π u state was the primary culprit for predissociating the v ′ ≤ 16 bands and also suggested that the 1 5 Π u state is responsible for predissociating the v ′ ≥ 17 bands. Lewis et al. (2018) also do not expect the B ′′ state to be the primary source of the significant predissociation for the bands v ′ = 11-16. Instead, they support the theory of Wheeler et al. (1998) that the primary perturbers are the 1 Π u and 1 5 Π u states (for the bands v ′ ≥ 12 and v ′ ≥ 17, respectively), along with the 2 3 Σ + u state for the bands v ′ ≥ 23. There is a century-long history of experimental works that have analyzed the UV spectrum of S 2 (Naudé and Christy, 1931; Olsson, 1936; Ikenoue, 1953; Meakin and Barrow, 1962; Heaven et al., 1984; Matsumi et al., 1984, 1985; Green and Western, 1996; Green and Western, 1997; Lewis et al., 2018). Large perturbations, which cause the regular patterns within branches to break, are a consequence of the B ′′ state. Historically, these perturbations had made high-resolution analyses difficult, but Green and Western (1996) and Green and Western (1997) were able to provide a deperturbed rotational analysis for the B − X and B ′′ − X transitions that could accurately model laser-induced fluorescence spectra. Wheeler et al. (1998) have investigated the predissociated bands of the B − X transition as well as the perturbing levels. More recently, cross-sections of S 2 have been measured at 370 K and 823 K at a synchrotron facility by Stark et al. (2018). These spectra were analyzed by Lewis et al. (2018) to develop a model for the UV spectrum of S 2 that includes the B − X predissociated bands.
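Since both pressure broadening and predissociation are lifetime-driven, their Lorentzian half-widths simply add, but only the pressure term carries the usual pressure and temperature scaling. Schematically, for a line at $\tilde\nu_0$,

$$ f_L(\tilde\nu) = \frac{1}{\pi}\,\frac{\gamma}{(\tilde\nu-\tilde\nu_0)^2+\gamma^2}, \qquad \gamma = \gamma_{\mathrm{pred}} + \gamma_0\, p \left(\frac{T_{\mathrm{ref}}}{T}\right)^{n}, $$

where $\gamma_{\mathrm{pred}}$ is the predissociation HWHM (up to tens of cm$^{-1}$ here), $\gamma_0$ the pressure-broadening coefficient at reference conditions, and $n$ the temperature exponent. This is a schematic form of the combination applied later in this work, where $\gamma_0$ = 0.05 cm$^{-1}$/atm and $n$ = 0.71 are adopted.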
In each of these prior works, rotational constants, and model details have been provided, with some earlier works providing measured line positions.However, a line list that can be used directly in radiative transfer models was not available.Recently, an ab initio line list has become available (Sarka and Nanbu, 2023) that can be used to simulate the spectrum of S 2 .Spin-splittings and perturbation constants were not included, which has a number of limitations when used for high-resolution applications. Fig. 2 provides a schematic of the potential energy curves (PEC) of the X, B, and B ′′ states (Xing et al., 2020).Due to the Franck-Condon principle; respective positions of the PECs; and the fact that in excited vibrational states, the overlap integrals are most efficient near the walls of the potentials, the absorption from the ground state (v ′′ = 0) would be most efficient to the higher vibrational states of the excited state (blue shaded region).Due to the same logic, the emission from the v ′ = 0 of the B ′′ state will be most efficient to the excited vibrational levels of the ground electronic state (red-shaded region). The PGOPHER S 2 model The S 2 line list for this work has been built using the PGOPHER program (Western, 2017), which applies spectroscopic constants to calculate transition frequencies and intensities.Our S 2 model is, in large part, constructed from the analysis by Green and Western (1996) and Green and Western (1997) for rovibrational bands below the dissociation limit.In these works, laser-induced fluorescence spectra containing B − X transitions of S 2 were fit to a Hamiltonian that simultaneously accounted for perturbations caused by the interacting B ′′ state.In our work, some of these constants have been refit using observed emission lines of S 2 (Olsson, 1936;Ikenoue, 1960;Patiño and Barrow, 1982).There are also later experimental works that reported measurements of rovibronic lines of S 2 (Heaven et al., 1984;Matsumi et al., 1984Matsumi et al., , 1985)).However, they report only spectroscopic constants and not the actual line positions.Considering severe perturbations, it is impossible to recreate line positions from these constants without having the original program.Moreover, even with these details, these constants may not work.Anecdotally, Matsumi et al. (1984) have acknowledged that the constants provided in Table I of their paper are tentative and do not reproduce the observed line positions even at lower rotational quanta.In order to build global spectroscopic models, is imperative (Gordon et al., 2016) that the experimental papers provide original measured line positions along with fitted constants. Accurate modeling of the B − X transition requires consideration of the interacting B ′′ 3 Π u state of 32 S 2 .In our model, deperturbed constants (T v , B, D, λ, γ, the spin-orbit coupling A, and the Ω = 0 lambda doubling constant o) are included for the B ′′ v ′ = 0-11 (Green and Western, 1996) and B ′′ v ′ = 12-21 (Green and Western, 1997) levels. 
As discussed above, the B ′′ − B perturbation is essential to provide accurate positions for lines within each interacting band. The Hamiltonian model of Green and Western (1996) accounts for perturbations of vibronic levels through interacting spin-orbit (α) and L-uncoupling (β) parameters, along with the corresponding centrifugal distortion parameters (α D , β D ). PGOPHER allows the inclusion of the B ′′ − B interaction parameters from Green and Western (1996) and Green and Western (1997), but a difference in definition requires α and α D from these works to be multiplied by −3√2. In conjunction with the PGOPHER fitting for the X and B states indicated above, the perturbation constants for the interacting B ′′ v ′ − B v ′ = 2-0, 3-1, 4-1, 4-2, 5-2 states were also refitted using line positions from Olsson (1936), Ikenoue (1960) and Patiño and Barrow (1982) and are given in Table 3. All other perturbation constants were provided by Green and Western (1996) and Green and Western (1997). Overall, perturbation parameters in our model span vibrational levels v ′ = 0-9 for the B state and v ′ = 0-19 for the B ′′ state. As noted earlier, the difficulty in the analysis of the B v ′ = 10 level means that perturbation constants are unavailable for it.

Table 1: Deperturbed spectroscopic parameters used in the PGOPHER model for the v ′′ = 0-10 vibrational levels of the X 3 Σ − g state. All values are provided in wavenumbers (cm −1 ) and have been refit using line positions in Olsson (1936), Ikenoue (1960), and Patiño and Barrow (1982). Values unchanged from Green and Western (1996) have been indicated. a Green and Western (1996). b Calculated from Huber and Herzberg (1979) and Barrow and Yee (1974); see text for details.

Table 2: Deperturbed spectroscopic parameters used in the PGOPHER model for the v ′ = 0-3 vibrational levels of the B 3 Σ − u state. All values are provided in wavenumbers (cm −1 ) and have been refit to line positions in Olsson (1936), Ikenoue (1960), and Patiño and Barrow (1982).

Table 3: Perturbation constants (in cm −1 ) that have been refit using line positions available from Olsson (1936), Ikenoue (1960), and Patiño and Barrow (1982).

Considering the bands above the dissociation limit, Lewis et al. (2018) have built a coupled-channel model of the B − X transition to account for predissociation of the v ′ = 11-27 vibrational levels of the B state. Term energies in Lewis et al. (2018) are provided with respect to the F 2 , v = J = 0 level of the X 3 Σ − g state and are separated for each Ω component (i.e., Hund's case (c) coupling was assumed). As we discussed above, this level does not actually exist for the principal isotopologue due to nuclear spin statistics, and in our PGOPHER model this virtual state would be at an energy of 8.2 cm −1 . To implement the term values of Lewis et al. (2018) into our PGOPHER model, a calibration is required that removes the artificially applied splitting and accounts for the energy of the F 2 , v = J = 0 level of the X 3 Σ − g state. This essentially adds λ + 8.2 cm −1 to each Ω = 0 term value of Lewis et al. (2018). Our calibrated term values are provided in Table 4, along with rotational constants and predissociation widths determined from Lewis et al. (2018). For some bands, we have slightly adjusted the rotational constants for better agreement at higher temperatures, as indicated in Table 4.
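The effect of such an interaction can be pictured with the textbook two-level case: if a B-state level and a B″-state level with deperturbed energies $E_B$ and $E_{B''}$ are coupled by a matrix element $H_{12}$ (built here from the $\alpha$ and $\beta$ parameters), the observed levels are the eigenvalues

$$ E_{\pm} = \frac{E_B + E_{B''}}{2} \pm \sqrt{\left(\frac{E_B - E_{B''}}{2}\right)^{2} + H_{12}^{2}}, $$

so the levels repel most strongly near the crossings, which is why the regular branch patterns break where B″ vibronic levels cross B ones. This is only a schematic two-level illustration, not the full Hamiltonian used in PGOPHER.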
Lewis et al. (2018) also recommend γ = 0.02 cm −1 for all vibrational levels of the B state, and this value is included in our model for the v ′ = 11-27 vibrational levels of the B state.

Refitting rotational constants for emission bands
Green and Western (1996) provided a deperturbed analysis of the B − X transition of S 2 with v ′ = 0-6 and v ′′ = 0-7, including perturbations from v ′ = 2-12 of the B ′′ state. Their work used a combination of line positions observed from laser-induced fluorescence spectra (primarily from lower J levels) and previously recorded high-temperature static cell measurements (including levels with J up to 100). In total, 3320 observed lines went into the fitting of their initial S 2 model. For the refit performed in this work, 1378 lines were used, giving a standard deviation of 0.130 cm −1 . The constants for levels of the B − X transition that were fit in this work are provided in Tables 1, 2, and 3.

Determining band strengths for the predissociated region
Theoretical oscillator strengths (f v ′ v ′′ ) are reported in the literature for many bands of 32 S 2 (Pradhan and Partridge, 1996; Smith and Liszt, 1971). However, these works do not cover all of the B − X predissociated bands measured by Stark et al. (2018) and often do not include hot bands. The Einstein-A coefficients reported by Xing et al. (2020) have been converted to oscillator strengths using the formulae in Bernath (2016). In addition, some works (Anderson et al., 1979; da Silva and Ballester, 2019; Xing et al., 2020) report the Franck-Condon (FC) factors (q v ′ v ′′ ), but these have not been implemented in this work.

Figure 3: Oscillator strengths (f v ′ ,0 ) for the bands of the B − X transition of 32 S 2 , plotted against v ′ with v ′′ = 0 (Smith and Liszt, 1971; Stark et al., 2018; Pradhan and Partridge, 1996; Xing et al., 2020). Also included are oscillator strengths from this work that were obtained by fitting to experimental absorption cross-sections at 370 and 823 K from Stark et al. (2018).

Fig. 3 provides a comparison of the oscillator strengths considered in this work for B − X bands with v ′′ = 0. Generally, there is good agreement between Stark et al. (2018) and Xing et al. (2020) for fundamental bands with v ′ < 10. However, since the oscillator strengths of Pradhan and Partridge (1996) and Smith and Liszt (1971) do not align well with those of Stark et al. (2018), it was necessary to fit the oscillator strengths using PGOPHER (Western, 2017) for the v ′ ≥ 10 fundamental bands using the calibrated cross-sections at 370 and 823 K (Stark et al., 2018). PGOPHER requires the square root of the band strength (i.e., √S 1 ) in order to scale the strength of the corresponding transition and the individual line intensities. These can be converted to oscillator strengths using the formulae in Bernath (2016). The resultant oscillator strengths determined from the fit are shown in Fig. 3 and provided in Table 5. The fitted values appear consistent with those from Stark et al. (2018) and Xing et al. (2020) for the v ′ < 10 bands and are also in qualitative agreement with the oscillator strengths for the predissociated region presented in Figure 6 of Stark et al. (2018). Our fit also indicates that the band with the strongest overlap (i.e., maximum oscillator strength) appears at v ′ = 12, which is a slightly lower vibrational level than that of Pradhan and Partridge (1996) and Smith and Liszt (1971), but consistent with the vibrational level reported by Stark et al. (2018).
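The Einstein-A-to-oscillator-strength conversion mentioned above can be sketched with the standard relation f = 1.4992 A (g′/g″)/ν̃² (A in s⁻¹, ν̃ in cm⁻¹; see, e.g., Bernath, 2016). A minimal helper is given below; the numerical values in the usage line are illustrative placeholders, not tabulated S 2 data.

```python
def a_to_f(a_einstein, nu_tilde, g_upper, g_lower):
    """Absorption oscillator strength from an Einstein-A coefficient.

    a_einstein       : Einstein A coefficient in s^-1
    nu_tilde         : transition wavenumber in cm^-1
    g_upper, g_lower : degeneracies of the upper and lower levels
    """
    return 1.4992 * a_einstein * (g_upper / g_lower) / nu_tilde**2

# Illustrative only: a strong UV band with A ~ 3e7 s^-1 near 31000 cm^-1
# and equal degeneracies gives f on the order of a few times 10^-2.
print(a_to_f(3.0e7, 31000.0, 3, 3))
```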
Hot bands with v ′′ = 1 are also prominent in the Stark et al. (2018) spectra, particularly at 823 K; therefore it was also necessary to fit the oscillator strengths for v ′ ≥ 13 for these bands. The fitted oscillator strengths for the hot bands are also given in Table 5. For B − X bands with v ′′ = 0 below the dissociation limit (v ′ = 0-9), the oscillator strengths calculated from Xing et al. (2020) have been used, given their consistency with Stark et al. (2018) for the v ′′ = 0 bands.

Table 5: Oscillator strengths (f v ′ ,v ′′ ) used in this work for 32 S 2 . These include those obtained from fits to experimental cross-sections at 370 and 823 K (Stark et al., 2018) and, where indicated, those taken from the literature.

Oscillator strengths calculated from Xing et al. (2020) were also used for the v ′′ = 1 and v ′′ = 2 hot bands up to v ′ = 10. The oscillator strengths for the v ′ = 11-12 bands have been estimated due to the lack of consistency between literature values. Oscillator strengths for the v ′′ = 2 hot bands with v ′ = 11-20 are provided by Pradhan and Partridge (1996), with an estimated strength for the v ′ = 21-27 bands. A summary of the oscillator strengths used in this work for the B − X bands with v ′′ = 0-2 is provided in Table 5. For B − X bands with v ′ = 0-3 and v ′′ = 3-10, oscillator strengths calculated from Xing et al. (2020) have been used.

The line list generated for 32 S 2 from PGOPHER has been converted into the standard format used by the HITRAN database (Gordon et al., 2022). This format is used as input to numerous radiative transfer codes for terrestrial and exoplanetary applications, and the line list is expected to be included in the HITRAN database (for this work, 58 is used as a provisional molecule ID number). Line positions (in cm −1 ), intensities (cm/molecule at 296 K), Einstein-A values (s −1 ), lower-state energies (cm −1 ), and transition assignments have been converted directly from PGOPHER. It is necessary to include pressure-broadening parameters for each line so that spectra can be calculated from the line list. For this work, the air- and self-broadening Voigt parameters have both been estimated as 0.05 cm −1 /atm. In addition, the temperature-dependence exponent of the broadening has been estimated as 0.71. These values have been approximated based on comparisons to similar parameters in HITRAN for O 2 . The line intensities in the HITRAN formalism are scaled by the "natural" terrestrial abundance of the atomic species, taken from De Biévre et al. (1984). Therefore, the intensities in PGOPHER for 100% 32 S 2 are scaled by 0.9028 in the HITRAN-formatted line list.

Figure 4: An overview of the S 2 line list calculated for this work. Vibronic bands of the B − X transition (with v ′ = 0-27, v ′′ = 0-10) are indicated. The line positions (cm −1 ) have been plotted against intensity (cm/molecule at 296 K) scaled by the natural abundance of 32 S 2 .

An overview of the S 2 line list is provided in Fig. 4. The line list includes B − X bands up to v ′ = 27 and spans the 21 700−41 300 cm −1 (∼242−461 nm) spectral range. The vertical axis shows intensity in HITRAN units and formalism (at 296 K), which assumes local thermal equilibrium. That is why the bands with high v ′′ values appear so weak: these levels have a very small population at 296 K. However, as shown in Figure 2, one should expect strong emission down to these levels from photochemically excited lower levels of the excited electronic state.
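For readers unfamiliar with the fixed-width HITRAN 2004 record layout referred to above, the sketch below parses the fields of a single 160-character .par line. The field offsets follow the published format specification; the snippet is a convenience written for this text, not part of the line list distribution.

```python
def parse_hitran160(record: str) -> dict:
    """Parse one 160-character HITRAN (.par, 2004 format) record."""
    assert len(record.rstrip("\n")) == 160
    return {
        "molec_id":   int(record[0:2]),        # 58 used provisionally for S2
        "local_iso":  int(record[2:3]),
        "nu":         float(record[3:15]),     # line position, cm^-1
        "sw":         float(record[15:25]),    # intensity, cm/molecule at 296 K
        "a":          float(record[25:35]),    # Einstein A, s^-1
        "gamma_air":  float(record[35:40]),    # air HWHM, cm^-1/atm (0.05 here)
        "gamma_self": float(record[40:45]),    # self HWHM, cm^-1/atm (0.05 here)
        "elower":     float(record[45:55]),    # lower-state energy, cm^-1
        "n_air":      float(record[55:59]),    # temperature exponent (0.71 here)
        "delta_air":  float(record[59:67]),    # pressure shift, cm^-1/atm
        "global_u":   record[67:82],           # upper electronic/vib. quanta
        "global_l":   record[82:97],           # lower electronic/vib. quanta
        "local_u":    record[97:112],          # upper rotational quanta
        "local_l":    record[112:127],         # lower rotational quanta
        "gp":         float(record[146:153]),  # upper statistical weight
        "gpp":        float(record[153:160]),  # lower statistical weight
    }
```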
In addition to the line list, a partition sum, Q, is required to recalculate the line intensities at different temperatures. For this work, the total internal partition sum (Q sum ) for 32 S 2 has been exported from the PGOPHER model, which employs direct summation of the energy levels. For consistency with other molecules in HITRAN, the Q sum values have been placed on the same temperature grid used in TIPS-2021 (Gamache et al., 2021). In addition, the lower-state energies have been adjusted to the energy of the lowest occupied level (i.e., +15.120664 cm −1 has been added to all energies) to be consistent with the HITRAN format and with the partition sum exported from PGOPHER. The S 2 HITRAN metadata and partition sum can be seamlessly implemented into the HITRAN Application Programming Interface, HAPI (Kochanov et al., 2016), to enable the calculation of cross-sections for this work. The HITRAN-formatted S 2 line list is provided as a supplementary file, along with the partition sum, to allow calculation with HAPI.

As noted earlier, the diffuse S 2 B − X bands in the UV require the inclusion of predissociated line widths in order to generate reliable cross-sections. Lewis et al. (2018) provides separately calculated predissociation widths for each Ω for the v ′ ≥ 10 B − X bands. These have been averaged for each vibrational level and are provided as half width at half maximum (HWHM) values in Table 4. A Python code has been generated to be used alongside HAPI in order to calculate absorption cross-sections with the inclusion of the predissociated line widths. This code is provided as a Supplementary file and is expected to be incorporated into future versions of HAPI. It is anticipated that it will also be of benefit to the calculation of predissociation for other molecules in HITRAN, in particular for the Schumann-Runge bands of O 2 .
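As an illustration of how the distributed files are intended to be used, the sketch below loads a local HITRAN-formatted table with HAPI and computes a Lorentzian absorption coefficient. The table name `S2_BX` is an illustrative placeholder, and the predissociation broadening is not included in this call: that step requires the Supplementary Python code distributed alongside the line list.

```python
from hapi import db_begin, absorptionCoefficient_Lorentz

# Load local HITRAN-formatted tables; HAPI expects S2_BX.data and
# S2_BX.header in ./data (the table name here is a placeholder).
db_begin("data")

# Lorentzian absorption coefficient at 370 K and 1 atm in HITRAN units
# (cm^2/molecule); predissociation widths must be added separately with
# the Supplementary code.
nu, coef = absorptionCoefficient_Lorentz(
    SourceTables="S2_BX",
    Environment={"T": 370.0, "p": 1.0},
    WavenumberRange=[21700.0, 41300.0],
    WavenumberStep=0.01,
    HITRAN_units=True,
)
```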
(2018). The oscillator strengths for our model have been determined by comparing the intensities at both 370 and 823 K. At this higher temperature, good agreement is also seen with the spectrum calculated using the line list from this work.

Discussion

The primary challenge in this work was to combine the constants and parameters of previous works into a consistent model that can be used for calculating the spectrum of S₂ in the UV. Since the spectral range of the B−X line list spans the dissociation limit of S₂, the accuracy of individual spectroscopic parameters is classified differently above and below this limit. Below the dissociation limit, for bands with v′ ≤ 9, the perturbation model substantially improves the accuracy of the line positions, as demonstrated by the refitting of the rotational constants. However, for bands above the dissociation limit with v′ ≥ 11, the position accuracy is difficult to estimate, as the lines are broadened due to predissociation and perturbations for these bands are not included. These bands should be expected to be consistent with those of Lewis et al. (2018), with a conservatively estimated position accuracy of ~10 cm⁻¹.

For bands involving levels below predissociation, the standard deviation of all fitted lines is 0.130 cm⁻¹; the position accuracy is thus much higher than for the bands above the predissociation limit. Figure 7 includes a comparison of the line list and calculated cross-section from this work (at 300 K) to the measurements of Green and Western (1996) in the region of the B−X (5,0) transition. While the intensity of the experiment is not equivalent to the absorption cross-section, the agreement of the line positions is excellent and demonstrates the accuracy of these bands. Strong perturbations of the B−X (5,0) band with the B′′−X (10,0) band have a significant impact on the line positions. Moreover, the Ω splitting also affects the line positions and significantly alters the location of each bandhead. Including these effects is essential for reproducing experimental observations, as demonstrated in Figure 7 by the comparison to the absorption cross-section from the ab initio work of Sarka and Nanbu (2023), which does not account for spin-splitting or perturbations. Furthermore, the predissociation widths included in Sarka and Nanbu (2023) do not agree with the predissociation broadening observed in the experiments of Stark et al. (2018).
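As a minimal illustration of what a predissociation-aware cross-section calculation involves, the sketch below sums Voigt profiles whose Lorentzian component carries a per-level predissociation HWHM (as tabulated in Table 4) on top of any pressure broadening. This is not the Supplementary Python code itself; the line position, intensity and width used are hypothetical.

import numpy as np
from scipy.special import voigt_profile

KB, C, AMU = 1.380649e-23, 2.99792458e8, 1.66053907e-27

def cross_section(grid, lines, T, mass_amu=64.0, gamma_pressure=0.0):
    """Sum-of-Voigt cross-section (cm^2/molecule) on a wavenumber grid (cm^-1).

    lines: iterable of (nu0, S, gamma_pre), with S in cm/molecule and
    gamma_pre the predissociation Lorentz HWHM in cm^-1.
    """
    sigma_tot = np.zeros_like(grid)
    for nu0, S, gamma_pre in lines:
        sigma_g = nu0 * np.sqrt(KB * T / (mass_amu * AMU * C**2))  # Doppler std dev
        gamma_l = gamma_pre + gamma_pressure                       # total Lorentz HWHM
        sigma_tot += S * voigt_profile(grid - nu0, sigma_g, gamma_l)
    return sigma_tot

grid = np.linspace(37890.0, 37920.0, 3000)
# Hypothetical line near the v'=16 bandhead; the width is loosely inspired by Table 4:
xs = cross_section(grid, [(37900.0, 2e-18, 0.1)], T=370.0)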
It should be highlighted that only a limited number of line positions could be used when refitting the constants. Rotational constants have primarily been provided by previous works, which allows a model to be constructed that works reasonably well for unperturbed levels. However, for a molecule like S₂, where large interactions substantially perturb the levels and observed line positions, it is necessary to include as many observed transitions as possible to refine the crossing points for the B′′−B interaction. It was also noted that our fit for the B−X (v′, v′′) = (2-2), (2-3), (2-4) bands to line positions from Olsson (1936) showed the largest deviations, with residuals of up to ~1 cm⁻¹, while other bands observed by Olsson (1936) performed well. We therefore attribute this deviation to the rotational levels of the B v′ = 2 state, which would need further experimental measurements to validate the rotational constants and perturbation parameters. One of the major uncertainties of our model is the accuracy of the B v′ = 10 level, which only has a partially resolved rotational structure in the experimental spectra. We used constants provided in Green and Western (1997), but it should be stressed that these values are only estimates and do not contain any of the perturbation considerations of the lower bands. Therefore, it should be expected that the line position accuracy of the model for the B v′ = 10 level is closer to that of the predissociated bands.

The intensities in this work have been calculated from oscillator strengths, with some determined from fits to the experimental measurements of Stark et al. (2018). Those oscillator strengths obtained from fitting have assumed that the measured spectra are primarily a consequence of the B−X transition of ³²S₂. However, it would be expected that approximately 8% of the absorption will be caused by the second-most abundant isotopologue, ³²S³⁴S. There is only limited spectroscopic information in the literature for the ³²S³⁴S isotopologue (e.g., Green and Western, 1997), and it was not sufficient to build a comprehensive line list of sufficient accuracy for this work. Lewis et al. (2018) included ³²S³⁴S as part of their model, with a band structure consistent with the residuals seen in Figs. 5 and 6. We can therefore attribute the majority of these residuals to the ³²S³⁴S isotopologue and note that our ³²S₂ intensities will have an increased uncertainty (~10%) due to the absence of ³²S³⁴S in our model. In addition, the oscillator strengths used in this work for the hot bands in the predissociated region are limited due to the lack of coverage in the literature, and a higher uncertainty can be expected. Stark et al. (2018) and Lewis et al. (2018) noted an apparent continuum in the experimental spectra with an approximate intensity of ~3×10⁻¹⁸ cm², which has not been included in our model and contributes to the residual (noticeable at higher wavenumbers). Moreover, there are resolved transitions in the Stark et al. (2018) spectra near 40 000 cm⁻¹ and 40 800 cm⁻¹ at 370 K that are due to the f ¹Δu − a ¹Δg transition, which are not included in our line list.
Further experimental measurements would be needed to determine an accurate line list for additional electronic bands of ³²S₂ and for the ³²S³⁴S isotopologue. The recent ab initio line list of Sarka and Nanbu (2023) accounts for isotopologues and continuum features; however, that line list is insufficient for modeling the spectrum of S₂ at high resolution and is unable to account for the large broadening effect caused by predissociation.

The B′′−X transition is included in our line list since the B and B′′ states are heavily mixed (Green and Western, 1997). In particular, the mixing between the B and B′′ states is strongest where the B v′ = 7-9 bands are located. The B′′−X transition is applied as a perturbation in the PGOPHER file, so no oscillator strengths are applied and the intensities are borrowed from the B−X transition. This gives rise to B−X and B′′−X bands with high vibrational levels for both v′ and v′′. Given that these levels will be expected to have small populations, we have restricted the HITRAN-formatted line list for the B−X transition to v′ ≤ 3 and v′′ ≤ 10 below the dissociation limit and to v′ ≤ 27 and v′′ ≤ 2 above the dissociation limit (see Fig. 4). The excluded bands can be recalculated using the PGOPHER file in the Supplementary Material.

There are no reported partition functions available for comparison in the literature. From a qualitative comparison with Figure 5 of van der Heijden and van der Mullen (2001), one can tell that the partition sum from our work is multiple times larger at 1000 K, due to the necessary inclusion of spin-splitting in our model. On a related note, since our model includes energy levels up to J_max = 150 and v_max = 10 for the ground state, we advise caution at temperatures beyond 1500 K, as we expect the partition sum to start to deviate from a complete partition function as temperature increases. The partition sum calculated for this work is included in the Supplementary Material.

Conclusions

A HITRAN-formatted line list for S₂ that covers the UV spectral range has been calculated from spectroscopic constants available in the literature (Green and Western, 1996; Green and Western, 1997; Lewis et al., 2018) using the PGOPHER program (Western, 2017) and a fit to line positions of emission bands with a standard deviation of 0.130 cm⁻¹. The line list includes the prominent B ³Σu⁻ − X ³Σg⁻ electronic transition of ³²S₂ with bands v′ = 0-27 and v′′ = 0-10. The perturbing electronic transition B′′ ³Πu − X ³Σg⁻ is also included, with v′ = 0-19 and v′′ = 0-10. The line list is provided as Supplementary Material in the commonly used HITRAN format and will also be freely available online at https://hitran.org/.

Line intensities for predissociated bands have been obtained by fitting to the experimental observations of Stark et al. (2018). The line list has been validated through comparisons to existing experimental cross-section spectra for the predissociated region. The predissociation of S₂ for v′ ≥ 10 in the B−X transition requires the inclusion of predissociation line widths. Therefore, a Python program has been developed to be used in conjunction with HAPI that can apply the necessary predissociation line widths when using the HITRAN-formatted line list provided in the Supplementary Material. Currently, HAPI does not have the functionality to include predissociation line widths; however, it is planned that the program will be incorporated into future versions.
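The cautionary note about the partition sum above 1500 K follows from how a direct summation behaves when the level list is truncated. Below is a minimal sketch of a direct-summation partition function, using toy degeneracies and energies rather than the actual PGOPHER level list.

import math

C2 = 1.4387769  # hc/kB in cm*K

def partition_sum(levels, T):
    """Direct summation over rovibronic levels, as PGOPHER does.

    levels: iterable of (g, E), with degeneracy g and energy E in cm^-1
    measured from the lowest occupied level (hence the +15.120664 cm^-1
    shift applied to the lower-state energies above). Because the level
    list is truncated (here at Jmax = 150, vmax = 10 in the real model),
    the sum increasingly underestimates the complete partition function
    as T grows, which is why caution is advised beyond ~1500 K.
    """
    return sum(g * math.exp(-C2 * E / T) for g, E in levels)

# Toy three-level example (hypothetical degeneracies and energies):
Q = partition_sum([(1, 0.0), (3, 23.6), (5, 70.8)], T=296.0)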
Our S₂ line list can be used for exoplanet and planetary atmospheric investigations and for photochemical models. The inclusion of our S₂ line list in photochemical models is expected to improve atmospheric interpretations of planetary spectra that have previously been based on estimates. In particular, interpretations of the JWST spectra of WASP-39b expect S₂ to be a key molecule in the formation of SO₂.

It should be noted that in May 2023 the Leiden database (Heays et al., 2017) updated their S₂ cross-sections to include a preliminary version of the data reported here (see Hrodmarsson and Van Dishoeck, 2023), but those data remain temperature-independent. The S₂ line list reported in this work provides greater flexibility for temperature coverage.

In addition to new experiments (especially for minor isotopologues), future ab initio calculations (after validation) can help improve line lists of the sulfur dimer. The work of Sarka and Nanbu (2023) is an important step in that direction. In principle, ab initio methods can be used to calculate line positions and intensities, but they may still be deficient for perturbations. They can also provide information on the predissociation widths and continuum contribution (see the review by Tennyson et al., 2023).

In summary, we provide a publicly available S₂ line list and associated calculation tools, which can be used to simulate the emission and absorption spectrum of ³²S₂ over the 21 700-41 300 cm⁻¹ (~242-461 nm) spectral range.

Figure 1: A schematic showing the possible rovibronic transitions for the B ³Σu⁻ − X ³Σg⁻ transition of S₂ with N′′ = 5. The ∆N∆J F′F′′(J′′) and ∆N(N′′)∆J(J′′) notations are both shown for each transition, with the transition color relating to ∆N. Even N levels of the X ³Σg⁻ state and odd N levels of the B ³Σu⁻ state (greyed and dashed lines) are not populated. Note that the depicted relative positions of the spin components of different rotational levels are qualitative and do not represent the actual relative energies, especially in the perturbed upper electronic state.

Figure 2: S₂ potential energy curves for the X, B and B′′ states. The vibrational levels of the X and B states used in this work have been included, along with the dissociation limit. Shaded areas demonstrate the strongest transitions according to the Franck-Condon principle for absorption (shaded blue) and emission (shaded red) between the X and B states. The potential energy curves have been plotted using data from Xing et al. (2020). The dissociation limit of 35 636.9 cm⁻¹ (Sun et al., 2019) is indicated, demonstrating its proximity to the v′ = 10 level of the B state.

Figure 5: Experimental S₂ absorption cross-section at 370 K from Stark et al. (2018) compared to a calculated spectrum (blue) that uses the line list from this study. The upper panel shows an overview of the predissociated region between 35 400-41 500 cm⁻¹, with even v′ indicated for the B−X (v′,0) bands. The lower panel shows a zoomed-in region around the B−X (16,0) band, as it has the smallest predissociated width, with a residual (Obs.−Calc.) shown in orange. Predissociated bands with v′ ≥ 10 have been calculated using the Lorentz HWHM values given in Table 4.

Figure 6: Experimental S₂ absorption cross-section at 823 K from Stark et al.
(2018) compared to a calculated spectrum (red) using the line list from this work. The upper panel shows an overview of the predissociated region between 35 400-41 500 cm⁻¹, with even v′ indicated for the B−X (v′,0) bands. Hot bands have not been indicated, but are prominent in between the (v′,0) sequence. The lower panel shows a zoomed-in region around the (16,0) band. This band has the smallest predissociated width, which partially reveals the rotational structure. The residual (Obs.−Calc.) is shown in green.

Figure 7: Comparison in the region of the B−X (5,0) band of S₂. (a) Laser-induced fluorescence spectrum of S₂ with a rotational temperature of ~300 K (reprinted from Fig. 6 of Green and Western (1996), with the permission of AIP Publishing). (b) Absorption cross-section at 300 K calculated using the line list from this work and convolved with a 0.1 cm⁻¹ Gaussian lineshape. (c) Line positions and intensities from the HITRAN-formatted line list of this work, showing the B−X and weaker B′′−X lines. (d) Absorption cross-section at 300 K from the Supplementary Material of Sarka and Nanbu (2023), convolved with a 0.1 cm⁻¹ Gaussian lineshape.

Table 4: Spectroscopic parameters used in the PGOPHER model for the v′ = 11-27 vibrational levels of the B ³Σu⁻ state. All values are provided in cm⁻¹, and values that have been adjusted from Lewis et al. (2018) are indicated. a Calibrated from the separated Ω components in Lewis et al. (2018); see text for details. b Calculated values from Lewis et al. (2018), unless otherwise stated. c Based on the average calculated values in Lewis et al. (2018). d Constant has been refit for this work.
2024-01-22T06:44:40.173Z
2024-01-19T00:00:00.000
{ "year": 2024, "sha1": "d76fbae379fd7a35bcba1c59e98e9d9e6168b8a4", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/mnras/advance-article-pdf/doi/10.1093/mnras/stae246/56346798/stae246.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "d76fbae379fd7a35bcba1c59e98e9d9e6168b8a4", "s2fieldsofstudy": [ "Physics", "Chemistry" ], "extfieldsofstudy": [ "Physics" ] }
261905442
pes2o/s2orc
v3-fos-license
Duodenal polyposis, a rare manifestation of gastrointestinal portal hypertension

Abstract

Portal hypertension can affect the entire gastrointestinal tract, including the duodenum. Portal hypertensive duodenopathy occurs relatively rarely in patients with portal hypertension secondary to cirrhosis or extrahepatic portal venous obstruction. We report the case of a 63-year-old female patient with cirrhosis who underwent an esophagogastroduodenoscopy. Multiple small duodenal polyps (2-3 mm) were found. The histopathologic examination of the duodenal biopsy specimen revealed a polypoid duodenal mucosa with preserved villous architecture, focal gastric foveolar metaplasia, and numerous ectatic capillaries in the lamina propria. The polypoid lesions found in the duodenum are a consequence of portal hypertension. The presence of one or several polyps in the duodenum of a patient with portal hypertension, with specific histological findings (dilated mucosal capillaries, no dysplasia), is diagnostic of duodenal polyp/polyposis in the context of portal hypertension.

Introduction

Portal hypertensive duodenopathy (PHD) is one of the manifestations of portal hypertensive syndrome [1], causing duodenal varices, mucosal friability, erosions, ulceration, vascular ectasia, a mosaic pattern, and duodenal portal hypertensive polyp/polyposis, a rare finding that was recently described.

Case report

We report the case of a 63-year-old female patient with cirrhosis due to hepatitis C virus (HCV) infection, who presented for an esophagogastroduodenoscopy. Her medical history includes sigmoid adenocarcinoma (status post proctosigmoidectomy 7 years earlier), adjuvant chemotherapy, and blood loss through the colostomy (HGB 5 g/dL). The patient was diagnosed with cirrhosis in 2008. Abdominal ultrasound detected minimal ascitic fluid and hepatosplenomegaly. She was recently diagnosed with hepatic encephalopathy. Upper endoscopy revealed grade I-II esophageal varices, severe portal hypertensive gastropathy, and multiple small duodenal polyps (2-3 mm) located in the duodenal bulb and the second part of the duodenum (Figure 1). Two biopsies were taken.

Microscopically, the polypoid duodenal mucosa presented preserved villous architecture with some enlarged villi (Figure 2), foveolar metaplasia of the surface epithelium, and numerous ectatic and congested capillaries in the lamina propria (Figure 3). There was no inflammation or epithelial dysplasia.

Immunohistochemistry tests were performed: CD34 and D2-40 were positive in the endothelium of proliferating blood vessels and lymphatics in the lamina propria (Figure 4). The Ki-67 proliferation index was <1% (Figure 5).

Based on the clinical information, correlated with the histopathological and immunohistochemical findings, a diagnosis of duodenal portal hypertensive polyp/polyposis was established.

Discussion

The abnormally high blood pressure in the portal venous system is due to chronic end-stage liver disease or to extrahepatic portal venous obstruction. Gastrointestinal manifestations of portal hypertension include esophageal varices, gastric varices in the cardia and fundus, portal hypertensive gastropathy in the body and fundus, GAVE (gastric antral vascular ectasia), portal hypertensive enteropathy, and portal hypertensive colopathy [2].
Duodenal polyps are a rare manifestation of PHD and have been described in case reports [6-10] and recent studies [11,12]. Most cases presented as multiple polyps, ranging in size from 1-2 mm [10] to 3 cm [6], located in the first and second parts of the duodenum, some of them being responsible for gastrointestinal bleeding. There was no gender predilection, and the age of the patients ranged from 1 to 73 years. Histological findings of the duodenal polyps described in the literature included vascular ectasia/congestion/thrombi, gastric foveolar metaplasia, reactive nuclear atypia, fibrosis, and smooth muscle proliferation.

Considering the patient's history of colonic adenocarcinoma, there was an initial assumption that the duodenal polyps found were adenomatous. Fortunately for the patient, no dysplasia was identified. Other differential diagnoses of duodenal portal hypertensive polyps include duodenal pancreatic or gastric heterotopia, duodenal hamartomatous polyps, inflammatory bowel disease-associated inflammatory polyps, and other lesions described in Table 1.

Conclusions

Duodenal polyps are a rare manifestation of portal hypertension, and an accurate diagnosis should be made based on the clinical context and typical microscopic findings: numerous ectatic mucosal capillaries and no dysplasia. Clinicians should keep this type of non-neoplastic polyp in mind when assessing patients with a history of digestive adenocarcinoma and portal hypertension.

Consent

Written informed consent was obtained from the patient for publication of this case report.

Competing interests

The authors declare that they have no competing interests.
2020-01-30T09:14:32.737Z
2019-12-01T00:00:00.000
{ "year": 2021, "sha1": "c9eccc4f84fb83e425f9c24bc1903f6bf518c255", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "c9eccc4f84fb83e425f9c24bc1903f6bf518c255", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
25341364
pes2o/s2orc
v3-fos-license
Nephropathy by Oxalate Deposits: Not Only a Tubular Dysfunction

Muraro E1, Gianesello L1, Priante G1, Comacchio A1, Carraro G2, Naso A2, Anglani F1, Valente M3 and Del Prete D1*
1 Department of Medicine, Nephrology Unit, University of Padua, Italy
2 Nephrology and Dialysis Unit, Padua Hospital, Italy
3 Department of Medical-Diagnostic Sciences and Special Therapies, University of Padua Medical School, Padua, Italy

Introduction

Hyperoxaluria may be either inherited or acquired. Primary Hyperoxaluria (PH) is a rare autosomal recessive disease characterized by increased endogenous oxalate production and accumulation in renal and extrarenal tissues. PH has three defined subtypes. Type 1 (PH1), the most common type of PH, is caused by a deficiency of the liver enzyme Alanine/Glyoxylate Aminotransferase (AGT), which results in metabolic overproduction of oxalate and glycolate. In PH type 2 (PH2), deficiency of Glyoxylate Reductase/Hydroxypyruvate Dehydrogenase (GRHPR) results in metabolic overproduction of oxalate and glycerate. Primary Hyperoxaluria type 3 (PH3) is caused by mutations in the HOGA1 gene that reduce the function of the mitochondrial enzyme 4-Hydroxy-2-Oxoglutarate Aldolase (HOGA1) [1]. Secondary hyperoxaluria is usually a consequence of gastrointestinal diseases frequently associated with fat malabsorption (inflammatory bowel diseases, cystic fibrosis, bariatric surgery or short bowel syndrome [2]) or of an increased dietary intake of oxalate [3].

Oxalate does not seem to be toxic to hepatocytes, but since it cannot be metabolized in mammals, it can only be filtered at the glomerular level and secreted by the renal tubules, with urinary excretion levels >0.5 mmol/1.73 m² per day in PH patients [4]. Owing to the high urinary oxalate excretion, the urine becomes supersaturated with calcium oxalate (CaOx), resulting in crystal formation within the tubular lumen [5]. In all forms, the oxalate excess is excreted in the urine, and patients with PH frequently present signs or symptoms related to kidney stones and progressive nephrocalcinosis. Renal dysfunction ensues with accumulation of the oxalate excess within the parenchyma, which induces interstitial inflammation and fibrosis that result in progressive loss of renal function [6]. Here we present the case of a young man with unexplained progressive renal failure.

Case Report

A 19-year-old man went to the emergency department because of an asthma attack. Blood analyses revealed renal failure, and for this reason he was admitted to our Unit. On admission, the patient showed a blood pressure of 130/70 mmHg; no other particular physical findings were seen. Laboratory data were as follows: serum creatinine 1.98 mg/dl, Blood Urea Nitrogen (BUN) 11.20 mmol/l, Glomerular Filtration Rate (GFR) 42 ml/min/1.73 m², WBC 8.99 × 10⁹/L, RBC 4.88 × 10¹²/L, hemoglobin 14.7 g/dL, platelet count 182 × 10⁹/L, Na 141 mmol/L, K 4.0 mmol/L, Cl 108 mmol/L, Ca 2.32 mmol/L; proteinuria was absent, and the urinary sediment showed microhematuria and rare crystals of calcium oxalate. Immunological screening for glomerulonephritis (complement, immunoglobulins and autoantibodies) and other blood data were mostly within the normal range. Familial and medical history was essentially negative. Renal ultrasound evidenced hyperechoic kidneys of normal size, with reduced cortico-medullary differentiation.
Since the patient did not show the presence of kidney stones or nephrocalcinosis, but the laboratory data revealed renal failure without other specific clinical features, a renal biopsy was mandatory. Moreover, the patient was in good clinical condition on admission to hospital. Light microscopy on paraffin-embedded tissue showed six glomeruli with mild mesangial expansion and normal thickness of the capillary basement membranes. In one glomerulus it was possible to observe more pronounced mesangial expansion, with disappearance of Bowman's space and the presence of birefringent crystals of calcium oxalate. Periglomerular infiltrate and fibrosis were noted (Figure 1). In close proximity to this glomerulus, a larger calcium oxalate crystal was revealed at the tubular level (Figure 1); interstitial fibrosis associated with a mononuclear infiltrate was also observed (Figure 2). Immunofluorescence was negative. The histopathological picture was suggestive of glomerular and tubulointerstitial nephropathy by oxalate deposits. The urinary organic acids were subsequently assayed, which evidenced a high excretion of oxalic and glycolic acids and a trace of glycerol. To confirm the diagnosis of hyperoxaluria type 1, molecular analysis of the AGXT gene was performed, after obtaining written informed consent, and revealed the presence of the c.731T>C mutation at exon 7 in homozygosity, which determines the replacement of isoleucine 244 with threonine. This mutation is unlikely to cause a complete lack of protein production [7]. The patient was discharged with a diagnosis of PH type 1 and treated with hydration therapy, sodium citrate (6 g/day) and pyridoxine (600 mg/day). The patient was not compliant with the suggested clinical follow-up, and it was difficult to monitor changes in urinary oxalate levels. Seven months after the diagnosis, renal ultrasound revealed for the first time the presence of spots with posterior shadowing at the level of the pyramids. This finding was confirmed by CT (Computed Tomography), which evidenced a hyperdense rim at the cortico-medullary junction. Within two years, the patient was in End-Stage Renal Disease (ESRD) and was on dialysis three times a week.

Discussion

Kidney damage in PH has already been described [4]. CaOx salts are poorly soluble in body fluids, and calcium oxalate deposits are observed within renal tissue as nephrocalcinosis or nephrolithiasis. This leads to progressive renal injury, inflammation and tubular obstruction, resulting in interstitial fibrosis, kidney failure and ESRD. CaOx crystals interact with the renal tubular epithelium and are deposited in the renal interstitium, where they induce a strong inflammatory response, sometimes with the formation of "foreign-body" type granulomas, and progressive interstitial fibrosis. Recently, Cartery et al. [8] described, in kidney biopsies of patients with acute oxalate nephropathy, the presence of glomerulosclerosis in addition to the known tubular damage. The case report presented here is peculiar both for the atypical onset of the disease (neither nephrolithiasis nor nephrocalcinosis, but renal failure) and for the evidence in the kidney biopsy of crystals at the glomerular level (Figure 1). To our knowledge, there are no other demonstrations in the literature of CaOx deposition in glomeruli. When the glomerular filtration rate (GFR) decreases (30-40 ml/min), the renal capacity to excrete calcium oxalate is significantly impaired, and it is possible to observe deposition in extrarenal tissues (systemic oxalosis). Oxalosis may involve different organs, such as the myocardium, bones and bone marrow.
Moreover, renal histopathology in patients with crystal deposits due to secondary hyperoxaluria showed that tubular atrophy and interstitial fibrosis ranged from mild to moderate and did not track closely with glomerular sclerosis [9]. Progressive renal parenchymal inflammation and interstitial fibrosis due to nephrocalcinosis and recurrent urolithiasis cause renal impairment, which usually progresses to ESRD. Renal failure secondary to crystal nephropathy has generally been attributed to intratubular obstruction, but recently some authors [10] have demonstrated that Nalp3-null mice (nucleotide-binding domain, leucine-rich repeat inflammasome) are completely protected from the progressive renal impairment and mortality due to oxalate nephropathy as compared with wild-type mice, emphasizing the role of the inflammasome. Upon activation, NALP3 proteins recruit the protease caspase-1, which cleaves the biologically inactive precursors of IL-1β and IL-18 to generate their mature inflammatory counterparts. There is evidence that hyperoxaluria is able to induce apoptotic changes in renal tubular epithelial cells involving the TNF and FAS pathways [11]. Since 1994 it has been known that exposure of renal cells to calcium oxalate crystals results in the activation of many different pathways, changes in gene expression and the initiation of DNA synthesis in epithelial cells [12]. This mechanism likely occurs in the tubulointerstitial compartment, but what happens at the glomerular level? In our case, the first sign of oxalosis was kidney failure without evidence of nephrocalcinosis or urolithiasis. Although stone formation is often observed in the urinary tract, the detection of crystal deposition in renal biopsy specimens is relatively rare, because renal biopsy might be avoided in cases of urinary tract stones. In our patient, a renal biopsy was performed to clarify the renal dysfunction. Intriguing was the discovery of crystals in a glomerulus, but it is difficult to determine which pathogenic mechanisms underlie this event. Oxalate is primarily eliminated via renal glomerular filtration, and in vitro studies have demonstrated that, once adherent to cells, calcium oxalate crystals initiate a cascade of reactions that includes crystal internalization, changes in gene expression, cytoskeletal reorganization and cell proliferation [1]. The presence in our biopsy of birefringent crystals of calcium oxalate in a glomerulus, together with the discovery of inflammatory cells at both the interstitial and the periglomerular level, suggests that both mechanical and inflammatory processes could be responsible for the ESRD. Were there special conditions in our patient that favored crystal aggregation in the glomeruli? Might the mutation found have played a role in this histopathological picture? It is difficult to demonstrate a genotype-phenotype correlation in PH1 because patients with the same genotype can have a different course of the disease. The rarity of the disease and the large number of mutations make it difficult to identify possible genotype-phenotype correlations and factors linked with outcome. The information reported in the literature focuses mainly on two aspects: the age of onset of the disease and the responsiveness to pyridoxine therapy. The presence of the p.Ile244Thr (c.731T>C) mutation seems to be related to a wide range of ages at onset and at ESRD [7]; in our patient, the age of onset was average and the evolution to dialysis treatment was relatively fast (two years).
The treatment options applied in this case were in accordance with the literature; moreover, there was a low response to pyridoxine therapy. This is in agreement with Fargue, who reported a response to pyridoxine in only a few patients carrying p.Ile244Thr [13].

Conclusions

The case described here is peculiar for its clinical presentation (renal failure without evidence of urolithiasis or nephrocalcinosis) and for the glomerular histopathological aspect of the oxalate deposition (Figure 1). To our knowledge, this is the first demonstration of CaOx deposition in a glomerulus to be reported in the literature. We consider it important to report this rare PH1 phenotype for a better clinical and genetic understanding of the disease.
2019-03-13T13:31:59.558Z
2016-02-23T00:00:00.000
{ "year": 2016, "sha1": "fed4b96d122ed7d4f33e3a9122b380988dad6eb2", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/nephropathy-by-oxalate-deposits-not-only-a-tubular-dysfunction-2165-7920-1000713.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4ede2fb931b3bbda4a9bc44f099a70ef6c19833c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
271658091
pes2o/s2orc
v3-fos-license
A Literature Review and Meta-Analysis on the Potential Use of miR-150 as a Novel Biomarker in the Detection and Progression of Multiple Sclerosis

Background: MicroRNA-150 (miR-150) plays a critical role in immune regulation and has been implicated in autoimmune diseases like Multiple Sclerosis (MS). This review aims to evaluate miR-150's potential as a biomarker for MS by consolidating the current evidence and highlighting miR-150's utility in improving diagnostic accuracy and monitoring disease progression. Methods: A comprehensive literature search was conducted in databases such as PubMed, Scopus, Google Scholar, SciSpace, MDPI and Web of Science, adhering to PRISMA guidelines. Studies focusing on the implications of miR-150 in MS were included. Data extraction was conducted, while quality assessment was done using the NOS and AMSTAR 2 tools. Statistical analyses were then conducted on the extracted data. Results: 10 eligible articles were included in the review. The findings show that miR-150 levels were consistently deregulated in MS patients compared to healthy controls, correlating with disease severity and clinical parameters such as Expanded Disability Status Scale (EDSS) scores and disease activity. Additionally, miR-150 is implicated in the inflammatory pathogenesis of MS, affecting immune cell regulation and inflammatory pathways. Conclusions: miR-150 is a promising biomarker for MS, showing significant potential for improving diagnostic accuracy and monitoring disease progression. Its consistent deregulation in MS patients and its correlation with clinical parameters underscore its clinical utility. Further research should validate miR-150's presence in saliva and its possible use as a novel biomarker, as well as its therapeutic potential in MS.

One miRNA of particular interest is miR-150 [12], which is highly expressed in lymph nodes, the spleen and the thymus [13]. It regulates the maturation and differentiation of T and B cells by targeting c-Myb, a transcription factor critical for the development of these cells [14]. In autoimmune diseases like Systemic Lupus Erythematosus (SLE) and Rheumatoid Arthritis (RA), miR-150 has been found to play a significant role [15], as well as in the modulation of other immune-related pathways, further demonstrating its broad impact on immune regulation across various autoimmune conditions [16]. miR-150 is associated with multiple sclerosis (MS) through its involvement in regulating the maturation and differentiation of immune cells [17], contributing to the inflammatory processes characteristic of the disease [18].

MS is a chronic autoimmune disease that affects the Central Nervous System (CNS) [19], leading to the progressive deterioration of neurological function [20]. It is characterized by an autoimmune response wherein the immune system targets and degrades the myelin sheath that insulates nerve fibers, resulting in impaired signal transmission between the CNS and peripheral tissues [21]. Symptoms vary widely among patients but commonly include fatigue, difficulty walking, numbness or weakness in the limbs, vision problems and cognitive impairment [22].
MS predominantly affects young adults [23], with most diagnoses occurring between the ages of 20 and 40 [24]. It is more common in women than in men, with a female-to-male ratio of approximately 2:1 [25]. The prevalence of MS also varies geographically, being higher in regions further from the equator [26]. The disease significantly impacts quality of life and can lead to disability [27], although the course of MS can be highly variable, with some individuals experiencing periods of remission and others facing a steady progression of symptoms [28].

At the immunological level, MS is primarily driven by the dysregulation of CD4+ T cells [29], particularly the Th1 and Th17 subtypes, which become autoreactive and cross the blood-brain barrier (BBB) [30]. Once in the CNS, these cells recognize myelin antigens as foreign, initiating an inflammatory cascade. This immune response involves the activation of other immune cells, including CD8+ T cells, B cells and macrophages, which further amplify the inflammatory process [31]. B cells contribute to the pathogenesis by producing autoantibodies against myelin components and presenting antigens to T cells, while macrophages and microglia release pro-inflammatory cytokines and reactive oxygen species that exacerbate tissue damage [32].

The chronic inflammation and immune-mediated damage lead to demyelination, axonal injury and neurodegeneration. Demyelination disrupts the efficient transmission of electrical impulses along nerve fibers [33], resulting in the clinical manifestations of MS, such as motor and sensory deficits, vision problems and cognitive impairments [34]. In the early stages of the disease, remyelination can occur, leading to partial recovery of function. However, as the disease progresses, the ability of the CNS to repair myelin diminishes, leading to irreversible axonal loss and the progressive accumulation of neurological disability [35]. The development of MS is thus the result of a multifactorial process involving genetic predisposition, environmental influences and a complex immune response that ultimately leads to CNS damage and functional impairment.

MicroRNA-150 (miR-150) has been identified as a key regulator in the differentiation and function of these T cells [29]. miR-150 modulates the maturation and activation of both CD4+ [29,30] and CD8+ T cells [36] by targeting specific mRNA transcripts that encode proteins critical for T cell development and response [37]. For instance, miR-150 targets the transcription factor c-Myb, which is essential for the proper development of T and B cells [30]. By fine-tuning the expression of c-Myb, miR-150 influences the proliferation and differentiation of these immune cells, thereby impacting the overall immune response. By modulating c-Myb levels, miR-150 indirectly affects the production of cytokines such as IL-17, IFN-γ and TNF-α [38].

CD4+ and CD8+ T cells play crucial roles in the immune system [37], with CD4+ T cells primarily functioning as helper cells that coordinate the immune response [39], while CD8+ T cells act as cytotoxic cells that directly kill infected or aberrant cells. MicroRNA-150 (miR-150) has been identified as a key regulator in the differentiation and function of these T cells [40]. The process by which the dysregulation of T cells leads to demyelination is shown in Figure 1.
MicroRNA-150 (miR-150) has emerged as a promising candidate biomarker for MS [41] due to its significant role in regulating immune responses and its differential expression in MS patients [37,42]. In addition to its diagnostic potential, miR-150 could be valuable in predicting disease progression and treatment responses [43]. Studies have shown that higher miR-150 levels in patients with clinically isolated syndrome (CIS) are associated with a higher likelihood of converting to MS, indicating its potential in early diagnosis [42-44]. Moreover, miR-150 levels have been found to decrease in the CSF after treatment with disease-modifying drugs like natalizumab, while plasma levels of miR-150 increase, reflecting changes in immune cell dynamics. This responsiveness to treatment further underscores the potential of miR-150 [45].

The aim of this literature review is to systematically evaluate the presence and quantification of miR-150 in saliva and its potential to serve as a novel, non-invasive biomarker for the early detection, diagnosis and monitoring of MS. This review seeks to explore the biological mechanisms of miR-150 within the immune system and to assess its correlation with clinical parameters. The keyword combinations used in the literature search were:

• "miR-150" OR "miRNA-150" AND "Multiple Sclerosis"
• "miR-150" OR "miRNA-150" AND "biomarker"
• "miR-150" AND "Multiple Sclerosis" AND "biomarker"
• "miRNA-150" AND "Multiple Sclerosis" AND "biomarker"
• "miR-150" OR "miRNA-150" AND "Multiple Sclerosis" AND "biomarker"

Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [47], the search process was systematic and transparent. A PRISMA flowchart was created to visually depict the process of selecting studies, documenting each step from identification to final inclusion. The flowchart is shown in Figure 2 [48]; it illustrates the process of selecting studies for the analysis.
Selection Criteria

Inclusion Criteria:
1. Studies that focused specifically on the role of miR-150 as a biomarker for MS. This includes investigations into its diagnostic or prognostic potential and its molecular mechanisms in the context of MS.
2. Only peer-reviewed research articles, reviews and meta-analyses were considered, to ensure the reliability and validity of the data.
3. Studies that provide detailed information about the diagnostic or prognostic value of miR-150 in MS, offering clear insights into its clinical relevance.
4. Articles discussing the molecular mechanisms of miR-150 in MS were included to understand the biological pathways and processes involved.
5. The search was limited to English-language scholarly articles to maintain consistency in data interpretation and analysis. The sources used in this analysis encompassed primary research articles, observational studies (such as cross-sectional, case-control or cohort studies), systematic reviews and meta-analyses.
6. Only studies with accessible full-text versions were included, to allow for a comprehensive evaluation and analysis of the findings.

Exclusion Criteria:
1. Studies that do not specifically address miR-150 in relation to MS were excluded, to maintain the focus of the review.
2. Articles lacking sufficient data or proper experimental validation were excluded, to ensure the reliability and accuracy of the findings.
3. Non-English publications, case reports and editorials were excluded, to maintain language consistency and focus on substantial research studies.
4. Studies investigating diseases other than MS without a direct connection to miR-150's role in MS were excluded, to keep the review relevant and specific.
5. Studies conducted on non-human subjects were excluded, to ensure the applicability of the findings to human MS research and potential clinical applications.
6. Studies that did not include miR-150 among the miRNAs studied.

Registration

To promote transparency and facilitate open access to our research process and findings, the review has been registered with the Open Science Framework (OSF) [49] under the registration code osf.io/zmwgh [50]. This registration ensures that all steps of our review, from the literature search to data extraction and analysis, are documented and accessible for verification and replication. By adhering to this standard, we aim to uphold the integrity of our systematic review and provide a reliable resource for future research on miR-150 as a biomarker in MS.
Data Extraction

Data extraction was conducted systematically to ensure the accurate and comprehensive collection of relevant information from the selected studies. The three reviewers, C.V.A., A.M.F. and C.A.M.A., independently extracted data using a standardized form, which included fields such as the main author's name, year of publication, type of study, journal name, materials and methods used, methods for miR-150 detection, detailed findings, correlation of miR-150 levels with clinical parameters, the authors' interpretation of the findings, other miRNAs studied, and population characteristics. This form was designed to capture essential details consistently across all studies.

The process involved each reviewer independently extracting data from a subset of the included studies. Once the initial data extraction was completed, the results were compared and Cohen's kappa was calculated [51]. A high kappa value indicates strong agreement beyond chance, suggesting that the data extraction process is robust and the findings are reliable. In instances where disagreements arose, a fourth expert (C.R.F.) was consulted to provide an impartial opinion. This collaborative approach ensured that the data extraction process was thorough, unbiased and adhered to the pre-defined criteria, ultimately enhancing the quality and integrity of the review.

Quality Assessment

To ensure the robustness and validity of the included studies, a rigorous quality assessment was conducted using standardized tools. The risk of bias for observational studies was assessed using the Newcastle-Ottawa Scale (NOS) [52], while systematic reviews and meta-analyses were evaluated with the AMSTAR 2 tool [53].

The NOS was employed to evaluate the quality of observational studies. This scale assesses studies based on three broad perspectives: selection of the study groups, comparability of the groups, and ascertainment of either the exposure or the outcome of interest. Each study was independently rated by the three reviewers on the following criteria:

• Selection (0-4 points): Including the representativeness of the exposed cohort, selection of the non-exposed cohort, ascertainment of exposure, and demonstration that the outcome of interest was not present at the start of the study.
• Comparability (0-2 points): Comparability of the cohorts on the basis of the design or analysis.
• Outcome (0-3 points): Assessment of the outcome, including the method of ascertainment, length of follow-up, and adequacy of follow-up [52].

The AMSTAR 2 (A Measurement Tool to Assess Systematic Reviews) was used to evaluate the quality of systematic reviews and meta-analyses. This tool comprises 16 items that cover aspects such as the review protocol, the comprehensiveness of the literature search, the inclusion of grey literature, the justification for excluding studies, the risk of bias in individual studies, and the appropriateness of the meta-analytical methods. Each review was assessed based on:

• Protocol Registration: Whether the review methods were established prior to conducting the review.
• Comprehensive Literature Search: The extent and inclusivity of the literature search, including grey literature.
• Justification for Excluding Studies: Explanation for study exclusions.
• Risk of Bias: Assessment of bias in individual studies and its impact on the review's conclusions.
• Appropriateness of Meta-analytical Methods: Suitability of the statistical methods used and assessment of publication bias [53].
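As a concrete illustration of the inter-rater agreement check described above, the sketch below computes Cohen's kappa for two reviewers over a set of categorical extraction decisions; the labels are hypothetical stand-ins, not actual extraction data from this review.

from sklearn.metrics import cohen_kappa_score

# Hypothetical per-study extraction decisions by two reviewers
# (e.g., the direction of miR-150 deregulation recorded from each paper).
reviewer_a = ["up", "up", "down", "up", "down", "down", "up", "up", "down", "up"]
reviewer_b = ["up", "up", "down", "up", "down", "up",   "up", "up", "down", "down"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance level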
Both tools facilitated a thorough evaluation of study quality, with discrepancies in assessment resolved through consensus discussions among the reviewers. We specifically aimed to include only studies with an NOS score of over 6, to ensure robust methodology and reliable results. Studies scoring low on these quality measures were scrutinized for potential biases and for their impact on the review's overall conclusions. This comprehensive assessment ensured that only high-quality evidence was included in the final analysis.

Data Synthesis and Analysis

The extracted data were synthesized qualitatively to provide a comprehensive overview of the evidence. For studies with sufficient data, a meta-analysis was performed using a random-effects model to account for heterogeneity.

Statistical Analysis

All statistical analyses were conducted using STATA (Statistics and Data Science) version 16.0 (StataCorp LLC, College Station, TX, USA) [54]. To assess heterogeneity across studies, the Q statistic and the I² index were applied. The Q statistic determined whether variations between studies were due to chance, while the I² index quantified the percentage of variation due to heterogeneity [55].

The diagnostic and prognostic values of miR-150 in MS were synthesized by calculating pooled sensitivity, specificity, positive likelihood ratios and negative likelihood ratios using a bivariate random-effects model [56]. Summary Receiver Operating Characteristic (SROC) curves were generated to illustrate overall diagnostic accuracy [57].

For the meta-analyses, the DerSimonian and Laird random-effects model was used to provide a conservative estimate of the pooled effect size. Subgroup analyses and meta-regressions explored potential sources of heterogeneity, such as differences in study design, patient populations or miR-150 measurement techniques [58]. Publication bias was assessed using funnel plots and Egger's test, which evaluates funnel plot symmetry to detect small-study effects [59].

Databases Research Results

In conducting the literature review on the potential use of miR-150 as a novel biomarker in the detection and progression of MS, a comprehensive and systematic search was executed across multiple scientific databases, including PubMed, Scopus, Google Scholar, SciSpace, MDPI and Web of Science. Initially, this search yielded 197 studies. Following the removal of duplicates, the pool was reduced to 112 unique articles. A subsequent screening process, based on the titles and abstracts, further narrowed this selection to 25 articles for full-text evaluation. However, the full texts of only 22 articles were accessible, presenting a limitation in the scope of the review.

Upon evaluating these 22 full-text articles, 13 were excluded as they did not primarily focus on miR-150, narrowing the selection to nine pertinent articles. A thorough assessment of the risk of bias was performed using the NOS, which led to the exclusion of three articles due to low scores (5 or below). Consequently, six articles were deemed eligible for inclusion in the review. These consisted of three review articles and three observational studies. The exclusion criteria were rigorously applied to ensure the inclusion of high-quality studies that met the NOS standards, thus enhancing the reliability of the findings.
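Although the analyses above were run in STATA, the quantities involved (Cochran's Q, the I² index, the DerSimonian-Laird between-study variance, and the pooled random-effects estimate) are straightforward to compute directly. The sketch below does so for hypothetical per-study effect sizes and variances, not values extracted from the included articles.

import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q and I^2.

    effects, variances: per-study effect sizes (e.g. standardized mean
    differences of miR-150 levels between MS patients and controls) and
    their sampling variances.
    """
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q statistic
    df = len(y) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, Q, I2, tau2

# Hypothetical effect sizes and variances for four studies:
pooled, se, Q, I2, tau2 = dersimonian_laird(
    effects=[0.8, 1.1, 0.5, 0.9], variances=[0.04, 0.09, 0.06, 0.05])
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval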
The comprehensive process of article selection and evaluation is illustrated in Figure 2, which depicts the PRISMA flow chart. This diagram provides a visual representation of the systematic approach undertaken, including the initial identification of studies, removal of duplicates, screening based on titles and abstracts, full-text evaluations, and the final inclusion of studies that met the established criteria.

The extracted data from the six eligible articles encompassed several critical aspects: the main author's name, year of publication, type of study, journal name, materials and methods used, methods for miR-150 detection, detailed findings, correlation of miR-150 levels with clinical parameters, the authors' interpretation of the findings, other miRNAs studied, and population characteristics, as can be seen in Table 2. This comprehensive extraction process facilitated a holistic understanding of miR-150's role in MS.

Other Sources' Research Results

In addition to the articles identified through the initial database search, further literature was uncovered by examining the citations within those articles. This citation tracking yielded an additional 17 articles, plus one more from website research. However, 9 of these were duplicates of the studies found via the database search, and 4 of the documents could not be retrieved due to the unavailability of full-text versions or access restrictions imposed by paywalls. Consequently, only five studies were available for full evaluation.

The retrieved studies underwent a rigorous quality assessment using the Newcastle-Ottawa Scale, a widely recognized tool for evaluating the quality of non-randomized studies in meta-analyses. One of the studies was excluded due to a low NOS score (5 or under), indicating potential bias and insufficient methodological rigor. This left four studies deemed suitable for inclusion in the review, meeting the predefined quality standards. The inclusion of these supplementary studies enriched the review by providing additional insights and corroborative findings. The data extracted from these additional studies mirrored the parameters used for the initially identified articles and are also shown in Table 2.

In addition to the data extracted in Table 2, which were used for the literature review, data regarding the statistical analysis were also extracted and are presented in Table 3. This rigorous approach ensured the reliability and consistency of the data extraction process, contributing to the robustness of the systematic review and meta-analysis. miR-150, along with other miRNAs, showed altered expression levels in MS patients compared to healthy controls, and its expression levels could be linked to disease activity and progression.

There is significant enthusiasm about using miRNAs as biomarkers, driven by their stability and ease of detection. However, there is a wide range of results reported by different research groups, and it is hard to identify a single ideal miRNA biomarker in MS.

Reliability and Validity of Data Extraction Assessment

Ensuring that the data adhere to the FAIR (Findable, Accessible, Interoperable and Reusable) and 5-star open data principles was a crucial aspect of our research protocol [70].
To achieve this, we implemented several measures throughout the data management and sharing process.The research team meticulously organized and documented all data with comprehensive metadata, including variable descriptions, collection methods, and preprocessing steps.The data were stored in the Open Science Framework under the registration code osf.io/zmwgh, ensuring long-term accessibility.By making the data available in an open, machine-readable format and providing clear licensing information, the team facilitated its reuse and integration with other datasets, promoting transparency and reproducibility within the broader scientific community. To achieve a reliable Cohen's kappa value [51], a structured and meticulous approach was implemented during the data extraction phase.Initially, three independent reviewers (C.V.A., A.M.F., and C.A.M.A.) participated in the data extraction process.Each reviewer was assigned a subset of the included studies and instructed to independently extract relevant data using a standardized data extraction form.Following the independent extraction, the results from each reviewer were compiled and compared.Discrepancies were identified and addressed in consensus meetings.In cases where consensus could not be reached, a fourth expert (C.R.F.) was consulted to provide an impartial opinion.Table 4 presents the Cohen's kappa statistic for inter-rater reliability during the data extraction process. Risk of Bias Assessment To ensure the validity of the findings in our literature review on miR-150 as a biomarker for MS, a comprehensive assessment of the risk of bias was conducted for each included study.This process utilized the NOS for observational studies, as recommended by Stang et al. [52], and the AMSTAR 2 tool for systematic reviews and meta-analyses [53].The results of these assessments are depicted in Figure 3 and detailed in Tables 5 and 6. included study.This process utilized the NOS for observational studies, as recommended by Stang et al. [52], and the AMSTAR 2 tool for systematic reviews and meta-analyses [53].The results of these assessments are depicted in Figure 3 and detailed in Table 5; Table 6.The judgements are categorised as high risk included study.This process utilized the NOS for observational studies, as recommended by Stang et al. [52], and the AMSTAR 2 tool for systematic reviews and meta-analyses [53].The results of these assessments are depicted in Figure 3 and detailed in Table 5; Table 6., moderate risk included study.This process utilized the NOS for observational studies, as recommended by Stang et al. [52], and the AMSTAR 2 tool for systematic reviews and meta-analyses [53].The results of these assessments are depicted in Figure 3 and detailed in Table 5; Table 6., or low risk included study.This process utilized the NOS for observational studies, as recommended by Stang et al. [52], and the AMSTAR 2 tool for systematic reviews and meta-analyses [53].The results of these assessments are depicted in Figure 3 and detailed in Table 5; Table 6.for each risk of bias item.(b) A Summary Plot illustrating the review authors' assessments of the risk of bias for each item, presented as percentages for all included studies.The assessments are categorised as high risk included study.This process utilized the NOS for observational studies, as recommended by Stang et al. 
Risk of Bias Assessment

To ensure the validity of the findings in our literature review on miR-150 as a biomarker for MS, a comprehensive assessment of the risk of bias was conducted for each included study. This process utilized the NOS for observational studies, as recommended by Stang et al. [52], and the AMSTAR 2 tool for systematic reviews and meta-analyses [53]. The results of these assessments are depicted in Figure 3 and detailed in Tables 5 and 6.

The NOS criteria encompass three overarching perspectives: the selection of study groups, the comparability of groups, and the ascertainment of either the exposure or the outcome of interest [52]. Each study was scored against eight items, with a maximum possible score of nine points indicating the highest quality and lowest risk of bias: C1-representativeness of the exposed cohort (0-1 point); C2-selection of the non-exposed cohort (0-1 point); C3-ascertainment of exposure (0-1 point); C4-demonstration that the outcome of interest was absent at the start of the study (0-1 point); C5-comparability of cohorts on the basis of design or analysis (0-2 points); C6-assessment of outcome (0-1 point); C7-follow-up long enough for outcomes to occur (0-1 point); C8-adequacy of cohort follow-up (0-1 point). Studies scoring below five were excluded due to significant methodological flaws that could compromise the findings.

In the context of the reviewed observational studies, the NOS assessment revealed varying levels of bias. Most studies demonstrated robust selection processes, with well-defined MS patient cohorts and appropriate control groups. However, some studies were limited by insufficient detail on the comparability of groups, particularly concerning adjustments for potential confounding variables such as age, sex, and disease duration. Additionally, the ascertainment of miR-150 levels often lacked consistency in the methods used, with variations in detection techniques potentially influencing the results. These elements of bias are graphically represented in Figure 3, providing a visual overview of the quality across studies.

For the systematic reviews and meta-analyses, the AMSTAR 2 tool was employed to evaluate methodological rigor [53]. This tool assesses various domains, including the comprehensiveness of the literature search, the presence of an a priori design, the assessment of publication bias, and the quality of included studies. Our evaluation highlighted that while most reviews adhered to stringent search strategies and included high-quality studies, some did not adequately address publication bias or provide detailed protocols for data extraction and synthesis. Table 6 presents the AMSTAR 2 scores for each review, illustrating their adherence to critical quality criteria.
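To make the NOS tallying described above concrete, here is a minimal sketch of the scoring rule; the component scores are hypothetical and do not correspond to any particular included study.

```python
# NOS scoring: C5 is worth up to 2 points, the other items up to 1,
# for a maximum of 9; studies scoring below five were excluded.
MAX_POINTS = {"C1": 1, "C2": 1, "C3": 1, "C4": 1, "C5": 2, "C6": 1, "C7": 1, "C8": 1}

def nos_total(scores):
    for item, value in scores.items():
        assert 0 <= value <= MAX_POINTS[item], f"{item} score out of range"
    return sum(scores.values())

# Hypothetical component scores for one study:
study = {"C1": 1, "C2": 1, "C3": 1, "C4": 0, "C5": 2, "C6": 1, "C7": 1, "C8": 1}
total = nos_total(study)
print(total, "included" if total >= 5 else "excluded (score below 5)")  # 8 included
```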
Overall, the risk of bias assessment conducted in this review serves as a critical filter, ensuring that only high-quality evidence contributes to the evaluation of miR-150 as a biomarker for MS. By meticulously assessing and documenting the quality of each study, we aim to provide a robust and reliable synthesis of the available evidence, guiding future research and clinical applications in this promising area of study.

Strength of Evidence Assessment

To thoroughly evaluate the quality and strength of evidence presented in the studies included in our review, we applied the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) tool [71]. This systematic approach helps in rating the quality of evidence and the strength of recommendations across various domains. The application of GRADE in this context ensures a transparent, consistent, and reliable assessment of the evidence related to miR-150 as a biomarker for MS. The results of the GRADE assessment are summarized in Table 7, where the GRADE approach [71] was used to assess the strength of evidence from the ten listed studies. The GRADE tool rates evidence on the basis of several factors: study design, risk of bias, consistency, directness, precision, and other considerations such as publication bias and study limitations. Each of these domains is evaluated to provide an overall quality rating of high, moderate, low, or very low [72].

The first phase entails classifying the research according to its design, with randomised trials treated as high-quality evidence and observational studies treated as low-quality evidence [71]. Given the nature of the research on miR-150, most studies were observational, which inherently begins at a lower level of evidence. This was adjusted for by evaluating the risk of bias using the NOS for observational studies and the AMSTAR 2 tool for systematic reviews.

Consistency refers to the similarity of estimates across studies. In our review, we observed a moderate level of consistency in the findings regarding miR-150 levels across different studies. Although some variations were noted, the general trend supported the role of miR-150 as a potential biomarker for MS, thereby warranting a moderate rating for consistency [72].

Directness examines whether the evidence directly answers the research question without the need for extrapolation. Most studies directly measured miR-150 levels in MS patients and controls, thereby providing direct evidence. However, some studies included inferences based on broader miRNA profiles, which slightly affected the directness rating [72].

Precision evaluates the certainty around the effect estimates, usually represented by confidence intervals. The included studies had varying degrees of precision, with some presenting narrow confidence intervals indicating high precision, while others had broader intervals. Overall, this resulted in a moderate rating for precision [72].

Other factors, including the potential for publication bias and study limitations, were also considered. While some publication bias was evident due to the predominance of positive findings, the thorough screening and inclusion criteria helped mitigate this concern. Study limitations were primarily related to sample sizes and methodological variations [71,72].
The application of the GRADE tool has provided a structured and transparent evaluation of the evidence supporting miR-150 as a biomarker for MS. While most of the evidence is of moderate quality, indicating some limitations and variability, the overall findings are consistent and promising. The high-quality systematic reviews further bolster the case for miR-150, suggesting that with more rigorous future research, miR-150 could become a valuable biomarker in clinical settings.

Synthesis of Findings

The comprehensive review of the literature on miR-150 as a biomarker in MS presents a compelling case for its potential utility in the diagnosis and monitoring of the disease [73]. Across multiple studies, miR-150 has been consistently identified as significantly deregulated in MS patients, underscoring its role in the pathophysiology of MS. For instance, Martinelli-Boneschi et al. identified miR-150 among the top deregulated miRNAs in MS patients compared to healthy controls, suggesting its potential as a biomarker for MS [64].

Furthermore, studies such as that by Quintana et al. have demonstrated that elevated miR-150 levels are associated with more severe forms of MS, as in patients with lipid-specific oligoclonal IgM bands. This association underscores miR-150's potential role in identifying patients with a more aggressive disease course [61]. Similarly, Perdaens et al. found that miR-150 levels were significantly upregulated in the cerebrospinal fluid (CSF) during MS relapses, linking its expression directly to disease activity and suggesting its potential as a marker for monitoring disease flares [68].

The role of miR-150 in modulating immune responses has been highlighted in several studies. Sondergaard et al. noted that miR-150 levels in T cells of MS patients were significantly correlated with inflammatory responses, further reinforcing its potential as a biomarker for MS and its involvement in the disease's inflammatory pathways. Increased miR-150 expression was also found in active MS lesions, supporting its role in the inflammatory process and its potential as a therapeutic target due to its regulatory effects on T cell responses [69].

In addition to its role in inflammation, miR-150 has been implicated in the epigenetic regulation of MS. Scaroni et al. discussed significant alterations in miR-150 levels, particularly in patients with active disease, emphasizing its importance in the epigenetic landscape of MS and its potential as a biomarker for disease monitoring [66]. This aligns with findings by Al-Temaimi et al., who noted that miR-150 levels were consistently deregulated across different stages of MS, correlating with disease progression and severity [65].

Bergman et al. specifically investigated the relationship between miR-150 levels and clinical parameters such as EDSS scores, finding significant correlations that suggest miR-150 can serve as a biomarker for assessing disease severity and progression [62]. These correlations were supported by Piket et al., who explored the mechanistic role of miR-150 in MS pathogenesis and found that it modulates several inflammatory pathways, contributing to neuroinflammation and demyelination [67].

The synthesis of these findings underscores miR-150's multifaceted role in MS. Not only is it a potential biomarker for disease detection and progression, but it also plays a critical role in the underlying pathogenic mechanisms of MS [61,65,67]. This point was also made by Martinez et al.,
who noted that the consistent deregulation of miR-150 in MS patients across different studies highlights its robustness as a biomarker. The correlations with clinical parameters such as EDSS and disease activity further validate its utility in clinical settings [60].

Additionally, miR-150's involvement in inflammatory and epigenetic pathways presents a potential therapeutic target [60,63]. The studies reviewed provide strong evidence that miR-150 influences T cell responses and other immune processes crucial to MS pathogenesis. This dual role as a biomarker and therapeutic target offers promising avenues for future research and clinical application, as noted by Gandhi [63].

The integration of miR-150 into clinical practice could enhance the precision of MS diagnosis and monitoring [61,62,64,69]. By providing a molecular marker that correlates with disease activity and severity, clinicians can better tailor treatment strategies to individual patients [65]. Moreover, miR-150's potential as a therapeutic target opens new possibilities for intervention strategies aimed at modulating its expression to mitigate disease progression [66,68].

In summary, the literature strongly supports miR-150 as a novel biomarker for MS. Its consistent deregulation in MS patients, correlation with clinical parameters, and involvement in key pathogenic processes make it a promising candidate for further research and clinical application. The potential for miR-150 to improve diagnostic accuracy and provide new therapeutic targets underscores its importance in the ongoing effort to better understand and treat MS.

Statistical Analysis Results

The diagnostic and prognostic values of miR-150 in MS were synthesized by calculating pooled sensitivity, specificity, positive likelihood ratios (PLR), and negative likelihood ratios (NLR) using a bivariate random-effects model. The pooled sensitivity of 0.88 indicates that the test accurately identifies 88% of true positive cases, demonstrating high efficacy in detecting the condition when present. The pooled specificity of 0.82 reflects the test's capability to correctly identify 82% of true negative cases, effectively excluding the condition in healthy individuals. With a pooled positive likelihood ratio (PLR) of 4.87, a positive test result is approximately 4.87 times more likely in individuals with the condition compared to those without, underscoring the test's strong discriminative power. Additionally, the area under the Receiver Operating Characteristic (ROC) curve (AUC) of 0.89 indicates excellent overall accuracy, highlighting the test's substantial utility in clinical settings for effective disease detection and exclusion. These values, presented in Table 8, highlight miR-150's potential as a reliable biomarker for MS diagnosis. To further illustrate the diagnostic accuracy, a summary receiver operating characteristic (SROC) curve was generated. The area under the SROC curve (AUC) was 0.89, suggesting that miR-150 possesses strong overall diagnostic performance, as seen in Figure 4.
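Although the pooled estimates above come from a bivariate random-effects model that estimates sensitivity and specificity jointly, the likelihood ratios relate to them through simple standard formulas. The sketch below applies these formulas to the reported pooled values; note that the naive PLR (about 4.89) is close to, but not identical with, the jointly estimated pooled PLR of 4.87.

```python
# Likelihood ratios implied by the pooled estimates reported above.
# PLR = sensitivity / (1 - specificity); NLR = (1 - sensitivity) / specificity.
sensitivity = 0.88
specificity = 0.82

plr = sensitivity / (1.0 - specificity)
nlr = (1.0 - sensitivity) / specificity

print(f"PLR = {plr:.2f}")  # 4.89, close to the pooled value of 4.87
print(f"NLR = {nlr:.2f}")  # 0.15: a negative result substantially lowers the post-test odds
```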
Subgroup analyses and meta-regressions were conducted to explore potential sources of heterogeneity, such as differences in study design, patient populations, or miR-150 measurement techniques. These analyses confirmed that the variations between the studies did not significantly impact the overall diagnostic accuracy of miR-150, underscoring its robustness as a biomarker. In conclusion, the statistical analysis supports the utility of miR-150 as a diagnostic and prognostic biomarker in MS. The pooled sensitivity, specificity, PLR, and NLR values, along with the SROC curve, demonstrate miR-150's strong diagnostic performance, making it a valuable tool in the clinical management of MS.

miR-155 is another miRNA that has been widely studied in the context of MS [75]. It is known for its role in promoting inflammation through the regulation of immune cell differentiation and activation. Elevated levels of miR-155 have been associated with active MS lesions and increased disease activity [75,76]. Similarly, miR-146a has been implicated in the modulation of inflammatory responses by targeting key signaling molecules in the NF-κB pathway [77]. These miRNAs, while effective in reflecting disease activity, do not show the same level of specificity for T cell subsets as miR-150 [67]. miR-150 is particularly advantageous due to its specific expression in CD4+ and CD8+ T cells [37]. This miRNA plays a critical role in the differentiation and function of these cells, which are central to the pathogenesis of MS [10]. The ability to monitor miR-150 levels provides direct insights into the immune dysregulation occurring in MS, offering a more targeted biomarker compared to others. The correlation between miR-150 levels and disease activity, as well as its impact on T cell functionality, makes it a superior marker for monitoring MS progression [78].

Traditional biomarkers for inflammation, such as C-reactive protein (CRP), interleukins (e.g., IL-6, IL-17) and the erythrocyte sedimentation rate (ESR), are widely used [79] but lack specificity for MS. These markers are elevated in a variety of inflammatory conditions, making them less reliable for MS-specific monitoring [80]. In contrast, miR-150 offers a more disease-specific approach, given its direct involvement in the immune pathways relevant to MS [81].

For example, elevated CRP levels indicate systemic inflammation and are commonly used to diagnose and monitor inflammatory conditions [82]. However, CRP is not specific to MS and can be elevated in a variety of other conditions such as infections, autoimmune diseases, and even physical trauma. Thus, while CRP can reflect inflammation, it does not provide specific information about MS-related immune activity [83]. In the same manner, interleukins such as IL-6 and IL-17 are involved in the inflammatory response and have been studied in the context of MS. Elevated levels of these cytokines can indicate immune system activation and inflammation [84]. However, similar to CRP and ESR, these markers are not exclusive to MS and can be elevated in various inflammatory and autoimmune conditions.
In terms of cost, miRNA profiling, including that of miR-150, can be more expensive than traditional inflammatory markers due to the need for specialized equipment and techniques such as qPCR and next-generation sequencing [85]. However, the specificity and reliability of miR-150 in reflecting disease activity and progression in MS can justify the higher cost. The long-term benefits of precise disease monitoring and potentially improved patient outcomes can outweigh the initial investment in miRNA-based diagnostics.

Recent studies have explored the presence of miRNAs, including miR-150, in saliva [86]. This presents an exciting opportunity for non-invasive testing. In saliva, miR-150, along with other miRNAs, has been used in forensic science research, for example to identify bodily fluids at crime scenes [87]. While miR-150's presence in saliva has been confirmed for forensic applications, its potential as a non-invasive biomarker for diagnosing or monitoring MS has not been explored. The detection of miR-150 in saliva offers a novel approach for MS diagnostics, allowing for easier and more frequent monitoring without the need for blood draws. This gap suggests a promising area for future research to assess miR-150's diagnostic value in MS using saliva samples.

The use of salivary miR-150 as a non-invasive biomarker for MS holds significant potential. This approach could facilitate regular monitoring of disease activity and progression, improving patient management and potentially leading to better outcomes. Future research should focus on validating the reliability and sensitivity of salivary miR-150 in large-scale clinical trials. Additionally, the development of cost-effective and user-friendly salivary miRNA detection kits could revolutionize MS diagnostics [88].

miR-150 shows distinct patterns in various clinical forms of MS, particularly in relapsing-remitting MS (RRMS) and secondary progressive MS (SPMS). It has been noted that miR-150 is significantly downregulated in SPMS compared to RRMS, suggesting its potential as a biomarker for the transition between these stages and for monitoring disease progression and inflammation in MS [61,64,65].

Overall, miR-150 stands out as a promising biomarker for MS due to its specific association with T cell subsets and its correlation with disease activity. While other miRNAs and traditional inflammatory markers have their merits [89], miR-150 offers a more targeted and potentially more accurate reflection of MS pathogenesis. The exploration of non-invasive testing methods, such as salivary miR-150 detection, further enhances its applicability in clinical settings. Continued research and technological advancements are likely to solidify the role of miR-150 in MS diagnostics and monitoring, paving the way for more personalized and effective patient care.

Conclusions

This study highlights the potential of miR-150 as a novel biomarker for MS, emphasizing its specificity and sensitivity compared to traditional inflammatory markers. Unlike general markers such as CRP and ESR, miR-150 offers a more targeted reflection of immune activity directly related to MS pathogenesis. The correlation of miR-150 levels with key clinical parameters such as the Expanded Disability Status Scale (EDSS) and disease progression status underscores its utility not only in diagnosing MS but also in monitoring its progression and therapeutic responses.
The synthesis of data from various studies demonstrates that miR-150 is significantly associated with the activity of CD4+ and CD8+ T cells, which play a crucial role in the autoimmune response characteristic of MS. This association provides a mechanistic link between miR-150 expression and MS pathology, reinforcing its relevance as a disease-specific biomarker. Furthermore, the detection of miR-150 using advanced techniques such as quantitative PCR (qPCR) and next-generation sequencing (NGS) offers high sensitivity and precision, enabling the detection of subtle changes in disease activity that traditional markers may miss.

In conclusion, miR-150 stands out as a promising biomarker for MS due to its specificity to the disease, strong correlation with clinical parameters, and potential for non-invasive detection. Future research should focus on further validating these findings in larger, diverse cohorts and exploring the integration of miR-150 testing into routine clinical practice. The development of cost-effective and scalable detection methods will be crucial for the widespread adoption of miR-150 as a standard biomarker in MS diagnosis and management.

Figure 2. PRISMA 2020 flow diagram of the systematic review and meta-analysis, adhering to the standards established by Kahale et al. [48]. It illustrates the process of selecting studies for the analysis.

Figure 3. (a) Summary of the risk of bias for each included study, as assessed by the review authors. The judgements are categorised as high risk, moderate risk, or low risk for each risk of bias item. (b) A summary plot illustrating the review authors' assessments of the risk of bias for each item, presented as percentages across all included studies. The assessments are categorised as high risk, moderate risk, and low risk [60-69].
Table 1. Examples of keyword combinations used in the search.

Table 2. Data extracted from the articles included in the research.

Table 3. Data collected from the studies included in the meta-analysis.

Table 4. Cohen's kappa statistic for inter-rater reliability during the data extraction process. Cohen's kappa measures the inter-rater reliability between pairs of reviewers by accounting for the agreement occurring by chance. Values range from −1 to 1, where 0 indicates no agreement better than chance, 1 indicates perfect agreement, and negative values indicate agreement worse than chance.

Table 5. Newcastle-Ottawa Scale assessment of the chosen studies.
Universal iron fortification of foods: the view of a hematologist

With the objective of reducing the high incidence of iron deficiency anemia, the Brazilian National Health Surveillance Agency (ANVISA) adopted Resolution 344 in December 2002, which made the addition of iron and folic acid to all industrialized wheat and maize flours in Brazil compulsory. After a series of doubts about this universal measure of food fortification, a review of case reports on long-term medicinal iron intake published in the medical literature was undertaken to investigate the clinical behavior of this hematological conduct. Long-term medicinal iron ingestion is an extremely rare and serious situation. The data suggest that there are cases of hemochromatosis in women whose illnesses were accelerated with this therapy. It is very difficult to determine the amount of iron ingested by Brazilian citizens under the current system of fortification, but there is evidence that there has been an appreciable increase. Although iron fortification of food has been recognized by some authors as a good strategy to combat iron deficiency, some nations have abandoned this measure. The patient with hemochromatosis is the most affected by compulsory iron fortification and, as this disease is now considered a public health problem, we believe that Resolution 344 of ANVISA should be reviewed in order to find a solution beneficial to all segments of the Brazilian population; one should not try to correct one condition (iron deficiency) by exacerbating another (acceleration of iron overload cases).

Introduction

Iron is part of the hemoglobin molecule, whose main function is to carry oxygen from the lungs to the tissues. In the body, other molecules containing iron, such as myoglobin and some enzymes, also have important functions. The total amount of iron in the human body is 4.0 grams, with 2.5 grams in hemoglobin, 0.5 grams in myoglobin and enzymes, and 1.0 gram as a reserve. For men, 0.9 mg of iron is required every day to restore the amount lost due to the shedding of cells from the intestine, skin and urinary tract. For women of childbearing age, iron requirements are about 1.3 mg per day, due to menstrual losses. In pregnant women this rises even further, to 3.0 mg per day. During periods of growth the need for iron is also high. There is no physiological mechanism for the excretion of this metal, and the shedding of cells in the gastrointestinal tract is the only effective way that the body can eliminate it.

Iron deficiency

Iron deficiency is the most common hematological disorder, involving about 30% of the world population. This disease most often affects children, women of childbearing age and pregnant women. Its prevalence varies according to region and socioeconomic conditions. In Brazil this illness has great epidemiological significance. Treatment consists of the oral or parenteral replacement of iron, blood transfusions (when necessary) and iron fortification of foods. It is essential to determine the cause of the anemia.

Diseases related to iron overload

Much research has been carried out in recent years to better understand these diseases. Cançado and Chiattone (1), in a recent work, presented the main clinical syndromes related to iron overload, including: 1 - primary (hereditary hemochromatosis types 1, 2, 3, 4 and other types); 2 - secondary transfusional diseases (including chronic hemolytic anemias); and 3 - non-transfusional diseases such as chronic liver disease, and African and iatrogenic iron overload.
Hereditary hemochromatosis is the prototype of diseases linked to iron overload; it is an autosomal recessive disease resulting from an abnormality of the hemochromatosis gene (HFE) on chromosome 6, most commonly involving the C282Y mutation. There is a high prevalence in Northern European countries, but it has also been reported in several Brazilian publications (2)(3)(4). Other chromosomal abnormalities are H63D, S65C and V256I. This disease results from an inappropriate increase in iron absorption by the intestines, with the surplus metal being accumulated in the tissues. The disease evolves with liver cirrhosis (30% complicated by liver cancer), splenomegaly, severe heart disease, diabetes, dark skin, endocrine disorders and neuropathy.

Resolution 344 of ANVISA, December 13, 2002

The Brazilian government, concerned about the high incidence of iron deficiency anemia in children and pregnant women in the country, instituted a policy of mass or universal food fortification. Thus, Resolution 344 of ANVISA, dated December 13, 2002, requires the addition of at least 4.2 mg of iron and 150 mcg of folic acid to each 100 grams of industrialized wheat and maize flour in Brazil (5). This measure was published on December 18, 2002, with 18 months for companies to comply; this period ended on June 18, 2004. In a recent work entitled "Considerations on the food fortification with iron and folic acid", published in the Revista Brasileira de Hematologia e Hemoterapia, we expressed our concern about this measure (6). Iron and folic acid are two medications used in medicine and, as such, have both beneficial and adverse effects and thus can be harmful to health. The beneficial effects have been widely analyzed in works on the theme, but little has been discussed about the toxicity of these drugs in the healthy population and in patients with iron overload.
As these flours are basic to our diet, it can be concluded that every citizen in the country, regardless of age, gender, ethnic background, occupation, socioeconomic condition, healthy or a carrier of some illness, began to ingest iron and folic acid every day, whether needed or not. Some questions were raised, related to: the use of these two important medicines in more than 190 million people without any medical control; the risk of administering iron to anemic patients without first ruling out the hypothesis of anemia secondary to a gastrointestinal neoplasm, for example; and whether mandatory food fortification with iron does not worsen the health of ordinary people or of patients suffering from iron overload illnesses.

Prolonged intake of medicinal iron

The literature often cites prolonged medicinal iron ingestion as a cause of illness by iron overload (1,(7)(8)(9)(10)). A review of these cases is therefore an excellent opportunity to clarify how this hematologic condition behaves. Iron overload due to prolonged medicinal iron ingestion is an extremely rare occurrence. After an exhaustive search we found 12 published cases, whose data are presented in Table 1. Ten were female, ages ranged from 10 to 77 years, treatment times were 5 to 49 years, and the total amount of ingested medicine (determined in eight patients) ranged from 500 to 26,300 g. Some important information was found in respect to these rare diseases:
- a great predominance of women, as expected, since with such large intakes of iron the disease would manifest in men after a much shorter time;
- Case 3 took 26,320 grams of iron and presented advanced liver cirrhosis, portal hypertension, anemia, dark skin and heart murmurs, and died in a hepatic coma with ammonia poisoning; two other patients also had portal hypertension and died after the rupture of esophageal varices;
- other clinical manifestations were diabetes (5 cases), heart disease (6 cases), and dark skin (9 cases);
- Case 11 was the only case in which iron administration was intramuscular;
- Case 12 was the only registered occurrence in a child and, along with Cases 9 and 11, showed an improvement in the disease with therapy;
- with the exception of the child, all cases were seen prior to the discovery of the chromosomal mutations;
- based on the data of autopsy or biopsy, the authors found that six cases were instances of hemochromatosis in women.

Discussing this subject, Beutler (7), in the 8th edition of Williams Hematology, states: "The homeostatic mechanisms of the body are such that the improper administration of oral iron is very unlikely to produce clinically significant iron overload. Of the few cases described, all except one (a child without tissue damage) were documented before the HFE gene was cloned, leaving open the distinct possibility that the patients were simply cases of hemochromatosis whose disease was accelerated by the excessive intake of iron."

Amount of iron ingested by a Brazilian citizen per day

It is very difficult to determine the amount of iron that is ingested every day by the Brazilian population under the current system of universal food fortification. In addition to the amount in a normal diet (which is about 14 mg per day), Brazilians ingest iron from fortified foods and, occasionally, from products enriched for commercial purposes. Recently, the results of an inspection carried out in ten mills of Fortaleza were reported by the Health Department of the State of Ceará in the northeast of Brazil.
The results show that there is a wide variation in the iron content of the wheat flour produced by these mills. The iron content of the samples ranged from 4.23 mg to 6.65 mg per 100 grams, and thus the levels were approved by the local authorities, as they reached the minimum value specified by law. Hence, the daily amount of iron that is ingested by a Brazilian citizen will also depend on the amount of metal in each batch of flour. The mandatory fortification program in Brazil has now been in force for 3072 days. To get an idea of how much iron a Brazilian citizen ingested during this period, we used a simple simulation: one kilogram of wheat flour produces 18 bread rolls with 3.3 mg of iron in each. If a person eats three rolls per day (that is, 10 mg of iron per day) over 3072 days, he has ingested a total of 30,720 mg of iron in addition to the amount in the normal diet. In this way three situations arise:
- patients with iron deficiency anemia may benefit by the lack of the metal being corrected;
- the normal person, who has a good control system for ingested iron, absorbs only what he needs; however, he took in a total of 30,720 mg of the metal, which is the equivalent of 768 40-mg iron tablets. In this respect, the book Guidelines on food fortification with micronutrients (23) of the World Health Organization and the Food and Agriculture Organization of the United Nations (WHO/FAO) already foresaw: when a population is exposed to increased nutrients in food, it is to be expected that many will benefit and others will not;
- on the other hand, the patient with hemochromatosis, who has an increased rate of iron absorption (1), may retain high deposits of iron, a burden that will be added to the iron already existing in the tissues.
Although this amount of iron is far smaller than that observed in the cases of medicinal iron intake described above, it is certain that some cases of hemochromatosis in the Brazilian population will be accelerated. An aggravating circumstance is that it has not been defined when the obligatory fortification of food according to Resolution 344 of ANVISA should be discontinued.

Universal fortification of foods with iron: a 'low-cost practical solution'

In truth, Resolution 344 of ANVISA is a practical measure, but it is not cost-effective for the patient, who pays all the costs of treatment. Thus, a 40-mg tablet of ferrous sulfate costs about 33 cents (US$0.16) in pharmacies; over one month, a patient with iron deficiency, taking one tablet per day, will spend R$9.90 (US$4.76). With fortified diets, taking into account that one kilogram of bread contains 42 mg of iron and costs R$8.00 (US$3.84) in supermarkets, the patient would spend R$240.00 in one month for the same treatment. Of course, in this latter situation, the nutritional value of the bread must be remembered. In Guidelines on food fortification with micronutrients (23), the authors affirm: "Fortified foods often fail to reach the poorest segments of the general population, which have the greatest risk of micronutrient deficiencies. This is because these groups always have restricted access to fortified foods due to their low income and the underdeveloped distribution channel." Hence, we question whether, with universal fortification, we are not favoring only the more privileged classes. When the cost of a treatment is calculated, expenses related to complications that arise with the therapy must also be considered.
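A short sketch makes the back-of-envelope figures above explicit. All inputs are the article's stated values; note that 30,720 mg corresponds to 768 tablets of 40 mg, and the R$240.00 figure assumes roughly one kilogram of bread per day to approximate a 40-mg daily dose.

```python
# Reproduces the back-of-envelope figures in the text (the article's values:
# 18 rolls per kg of flour, ~3.3 mg of iron per roll, three rolls a day,
# 3072 days of mandatory fortification).
iron_per_roll_mg = 3.3
rolls_per_day = 3
days_in_force = 3072

daily_intake_mg = round(iron_per_roll_mg * rolls_per_day)  # ~10 mg/day, as in the text
cumulative_mg = daily_intake_mg * days_in_force            # 30,720 mg
tablet_equivalents = cumulative_mg / 40                    # 768 tablets of 40 mg each

# Monthly cost comparison (prices in Brazilian reais, as stated in the text):
tablet_price = 0.33        # one 40-mg ferrous sulfate tablet
bread_price_per_kg = 8.00  # 1 kg of bread contains 42 mg of iron
monthly_tablet_cost = round(30 * tablet_price, 2)          # R$9.90
monthly_bread_cost = 30 * bread_price_per_kg               # R$240.00, assuming ~1 kg of bread/day

print(daily_intake_mg, cumulative_mg, tablet_equivalents)  # 10 30720 768.0
print(monthly_tablet_cost, monthly_bread_cost)             # 9.9 240.0
```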
Universal fortification may result in an increase in patients with iron overload (hemochromatosis, transfusional hemosiderosis, etc.), which will certainly increase the cost to the healthcare system, with the need for frequent phlebotomies, high-cost treatments with iron chelators, treatment for diabetics and, exceptionally, transplantation for liver cancer.

The World Health Organization and food fortification

Food fortification has been considered by some authors to be the best strategy to increase the iron intake of a population, especially for children and pregnant women (24,25). The World Health Organization recognizes four types of fortification: universal or mass, open market (commercial), targeted (for high-risk groups), and household and community fortification. The guidelines of these fortifications are described in detail in the book Guidelines on food fortification with micronutrients (23). This is a work of great scientific content prepared by a team of editors; six other distinguished personalities from the fields of economics, social sciences, nutrition, and food and nutritional sciences also collaborated in this magnificent work. It is strange that no hematologist was included in a work by the World Health Organization with so many renowned researchers; a hematologist, of course, would increase the understanding about the effects of offering iron to needy populations.

Universal iron fortification of foods

Guidelines on food fortification with micronutrients (23) defines fortification as "the practice of deliberately increasing the content of an essential micronutrient, for example, vitamins and minerals (including trace elements) in food, to improve the nutritional quality of the food supply to produce a benefit to public health with a minimal risk to health". Universal iron fortification of foods certainly results in some beneficial effects in the needy population, in particular in children with nutritional deficiencies. However, should it be allowed to accelerate the evolution of cases of hemochromatosis and other illnesses involving iron overload? Numerous works have been published on this theme. The fortification of wheat flour with iron has been used in Canada, Great Britain and the United States since 1940, with control of iron deficiency being achieved in these countries. In recent years there has been growing interest in fortification programs in developing nations (26). However, the fortification of foods with iron was suspended in Sweden in 1994. This was considered one of the largest fortifications in the world, with the addition of 4.1 mg of iron per 100 grams of flour. At that time, Olsson et al. (27) studied 16 patients with hemochromatosis, during and after the period when fortification was stopped, and found that there was a decrease of 0.65 mg of iron absorbed per day and that the interval between phlebotomies increased by 10 days. Applying these data to Brazil, a patient with hemochromatosis would have absorbed an additional 1996.8 mg of iron in 3072 days. Denmark also suspended the fortification of wheat flour with iron, in 1987. Millman (apud Lynch (26)) reports that this fortification had no effect on the prevalence of iron deficiency in men or in over-40-year-old women in the pre- and post-menopausal periods. Lynch (26), in a detailed work on the risks of iron fortification and nutritional anemia, arrived at the following conclusion: "The only very well-documented risk of universal fortification is an increase in the rate of accumulation of iron in individuals with the HFE phenotype of hemochromatosis, who may require more frequent phlebotomies. An increased absorption of iron can also be expected in anemic patients with iron overload." Adamson, in the 6th edition of the book Harrison (28), says that there has been a decline in interest in the supplementation of iron in bread and cereals due to the prevalence of the hemochromatosis gene, which would result in a high risk of overload in these patients.

Market-driven Fortification in Brazil

Guidelines on food fortification with nutrients (23) recognizes that market-driven fortification can play a positive role in public health by offering needy populations some products that they need. The choice is voluntary, and this mode is characterized by the possibility of adding a higher quantity of a particular nutrient than can be done in universal fortification because of technical and safety issues. This type of fortification is more widespread in industrialized countries. Thus, by giving the population, in particular children, a wide range of products enriched with iron, a remarkable contribution would be made to helping the Brazilian Government solve the problem of iron deficiency.

The hemochromatosis patient

The patient with hemochromatosis is the one who is most handicapped by Resolution 344 of ANVISA, as he is practically forced to ingest a larger amount of iron. Hemochromatosis was first described by Trousseau in 1865. It was long considered a rare disease, an inborn error of metabolism. With the discovery of the chromosomal mutations at the end of the last century, this was proven otherwise; there is a high frequency of the disease not only in Nordic countries but in other regions of the world. In Brazil it is frequent, judging by the numerous papers presented at National Congresses (2)(3)(4). The disease is characterized by an inappropriate increase in the intestinal absorption of iron and its accumulation in organs and tissues including the liver, spleen, heart, pancreas, endocrine glands, skin and joints. Clinical manifestations generally appear in the third and fourth decades of life, accompanied by general symptoms of fatigue, depression, joint pain, abdominal pain, hair loss, etc. As a result of the idea that it is a rare illness, many patients in the United States consult several different physicians before the disease is diagnosed (29). It is of fundamental importance that the diagnosis is made early, because phlebotomy therapy leads to regression of clinical manifestations such as cirrhosis and increases the survival rate of these patients to close to that of normal people. Due to the frequency of the disease and the need for an early diagnosis, hemochromatosis should be considered a public health problem (30). Hemochromatosis should have the same status as high blood pressure, diabetes, obesity and metabolic syndrome, and should have research programs to better understand the disease.

Conclusion

We are not against the iron fortification of foods in its other modes: targeted or market-driven. To be against these would mean ignoring the numerous studies carried out by the group from Sao Paulo in this important area of food fortification, which aims to combat iron deficiency. However, we are against mass food fortification, because we believe that one cannot correct one problem (iron deficiency) by exacerbating another condition (acceleration of cases of hemochromatosis) which is as severe as the first.
With the objective of collaborating with the health authorities, we suggest that Resolution 344 of ANVISA should be reconsidered to find a solution that is beneficial to all segments of the population in Brazil. We suggest that a proportion of the industrialized wheat flours in Brazil should be sold without the addition of iron. It would be what we would call 'semi-universal' fortification. For the Brazilian industry, which has helped us so much in this process of food fortification, this would be an excellent opportunity to diversify its products! To conclude, we should remember:
- every diabetic has the right to food without glucose;
- every patient with coeliac disease has the right to gluten-free food;
- every patient with hypertension has the right to food without salt.
Why is the patient with iron overload not entitled to food without iron?
Curiosity, Checking, and Knowing: a Virtue-Theoretical Perspective

In his important and original book, Knowing and Checking, Guido Melchior provides advice on how to tackle skepticism. I argue that his analysis points to a possible virtue-theoretic answer to skepticism, which I call the restraint solution: activate your self-trust and restrain your inquisitiveness! It leads one to the ideal of bounded reflective curiosity: when it comes to knowledge, we should restrain our second-order, reflective curiosity and stay content with a somewhat Moorean trust in ordinary everyday beliefs. We can preserve our ordinary, first-order vigilance and investigative interest (curiosity) without falling into skeptical over-caution, which is basically a reflective, second-order vicious attitude.

Introduction

I am happy to discuss here Guido Melchior's Knowing and Checking: An Epistemological Investigation (2019). Due to its richness and originality, I have learned a lot from it. (I will be calling the author "Guido" as I always do.) Although checking is highly relevant to epistemology, it had not been addressed much in the literature before Guido's book. Therefore, he has made a significant contribution with this work. Congratulations, Guido! Given the quality of the work, I will not be questioning its main claims but will go along with it and try to place it within the general perspective I prefer, namely virtue epistemology. The two approaches relevant for the work are the following: (1) a general interest in epistemic rationality (what is the rational way of facing epistemic uncertainty?) and (2) a more particular interest in virtue epistemology.

Next come connections with people's expectations concerning knowledge. The main proposal here is that the context of checking introduces a very high demand on the conditions of knowledge: if your method is not sensitive, you cannot arrive at knowledge. This is summarized in the following principle, KSAC: "In contexts of checking, when we raise the question whether p (or an alternative q) is true and deliberate about methods for settling this question, we tend to think that we do not know that p via strongly insensitive methods, especially not via monotonous methods. In other contexts, this tendency does not apply." (Ibid.: 142)

In other words, in non-checking contexts we are ready to ascribe knowledge to a subject who has not used sensitive methods, and such methods ground our normal ascriptions of knowledge (I accept that I have knowledge that my liver is healthy although I might have some statistically super rare and hard-to-detect liver disease that I have no evidence of). When I turn to check, I start raising questions about the sensitivity of my methods (if I had a liver disease, would I have any evidence of it?). And I begin to rationally doubt my convictions. In short, too much checking blocks our normal knowledge. My second question for Guido: is this your general diagnosis of the typical checking context? If it is, it looks as though your general picture of checking, which I find attractive, assumes the following: "The guiding idea [...] is that in some contexts of checking, e.g., when raising the question whether p is true and when deliberating about methods for settling this question, we think that we do not know that p via insensitive methods, i.e., via methods that fail to be checking methods." (Ibid.: 6-7) Then, in this context, we assume that it is rational to check whether p, in order to find out whether p.
What does this tell us about the relationship between checking and knowledge? Can they be separated as sharply as you would like? This brings us close to what you call type-2 moderate invariantism, which claims that "[c]onsulting a sensitive method for o and not d/l being true is not necessary for knowing that o" (Ibid.: 172). To paraphrase the Gospel referring to Doubting Thomas: "Blessed are they that have not checked and yet have believed."

The entire Chapter 3 is devoted to a detailed explication of SAC and KSAC. The main claim sounds simple: too much checking blocks our (normal) knowledge. Certainly, it is far from being simple, but let us look at the Zebra case. Do you need to check the zebra exhibit to recognize that these are zebras? No! Otherwise, you would never come to know. Restrictions on checking, as well as the separation of knowledge and checking, normally save the day. Guido praises the moderate invariantist: "Consulting a checking method is necessary neither for knowing that in the pen is a zebra nor for knowing that in the pen is not a painted mule. S's observation plus background knowledge is not a method for checking that there is not a painted mule although it is a method for checking that there is a zebra. In checking contexts when we raise the question whether there is not a painted mule in the pen and deliberate about methods for settling this question, we falsely think S does not know that the animal is not a painted mule because we falsely think that knowing requires having consulted a checking method." (Ibid.: 172)

Here is another quotation: "[C]ontextualism and SSI defend the view that, in some contexts (when not considering the possibility of d/l), knowing that ¬d/¬l and knowing that o do not require a sensitive method, but in other contexts (when considering the possibility of d/l), they do. I think that KSAC can actually explain closure puzzles. However, I do not want to take a position about whether the verb 'knowing' really behaves the way contextualists suggest. Therefore, I want to remain neutral about whether contextualism or invariantism is true." (Ibid.: 178) (The text has d for cases of deception and l for lottery cases.)

However, the view does face a problem. Look at this piece of news from Gaza City, Gaza Strip, Oct. 9, 2009: "The Marah Land Zoo's only two zebras died of hunger earlier this year when they were neglected during the Israel-Hamas war. Zookeepers in Gaza have found a creative way of drawing crowds to their dilapidated zoo. They have been painting their donkeys to make them look like zebras." (Thanks to Danilo Šuster for telling me about this newspaper story.) Here Doubting Thomas seems more rational: unless you have checked, you do not know whether you are dealing with zebras or with donkeys. Thus, a third question for Guido is in order: would you stay with the undemanding contextualist, or would you join Doubting Thomas?

Back to the book's main line of inquiry, i.e., the road from checking to knowledge or away from it. A central issue in the book is skepticism: digging too deeply will turn you into a skeptic. I would put it as a problem of uncontrolled reflective curiosity, of overdone zetetic work. Guido rightly places the issues of skepticism within the context of self-reflection. Here the skeptically-minded inquirer digs too deeply and at the wrong place, thus producing a catastrophe.
This brings us to the book's central and concluding chapter, which tackles the question that arises in discussing skepticism and bootstrapping: in what sense does the wish to check go too far, thus blocking knowledge? In the process of checking, the target belief is completely isolated from its context and judged in abstracto; no wonder this leads to desperate bootstrapping and the blocking of knowledge. (Cognitive scientists would say that the process looks like decoupling, i.e., separating beliefs from the context that generated them. The problems are then simply problems with such decoupling: too much and in the wrong places.) Therefore, the subject should activate her self-trust, restrain her zetetic, inquisitive curiosity, and thus avoid the skeptical threat.

Checking contrasts with ordinary self-reflection, which does not go as far in reflective curiosity as checking does. "The process of checking whether one's own beliefs are true differs concerning its internalist features from ordinary self-reflection. In the case of ordinary self-reflection, S believes that p, believes that she believes that p, and automatically also believes that her belief that p is true, viz. without having raised the question whether her belief that p is true and without having intentionally used a method for settling this question. In this case, S believes that her belief that p is true without having checked whether her belief that p is true. In order to check, S has to perform an additional cognitive process. Moreover, in cases of ordinary self-reflection, the focus of S's attention tends to be the world [...]." (Melchior, 2019: 222)

In the concluding chapter of the book (Chapter 8), Guido notes several features that accentuate the contrasts between ordinary self-reflection and checking one's own beliefs. In ordinary contexts, a reflective subject typically has not only first-level beliefs about the external world, e.g., that there is a computer in front of her, but also higher-level beliefs (at least implicitly) that she has these external-world beliefs, as well as beliefs (at least implicitly) that her external-world beliefs are true. These are the characteristics of the processes of ordinary self-reflection. Hence, both checking one's own beliefs and ordinary self-reflection can involve the following beliefs:
• B(There is a computer in front of me)
• B(I believe that there is a computer in front of me)
• B(My belief that there is a computer in front of me is true). (Ibid.: 222)

However, when doubting our own beliefs, we raise the question about the truth of our own beliefs and intentionally seek a method that would answer it. We do not raise this question in cases of ordinary self-reflection. This process of acquiring higher-level beliefs via doubting is neither automatic nor synchronic. Furthermore, we tend to shift our attention from the external world toward our own beliefs and the question of their truth. Thus, ordinary self-reflection and checking one's own beliefs can also be distinguished by a tendency to shift the attention from the external world to our own beliefs and their potential truth.

Compare Jane Friedman (2019) on checking as inquiry: "First, checking is inquiring. Sometimes we 'check' in a thinner sense - we have the habit of jiggling the lock a few extra times or tapping our pockets when something important is in there. In some of these cases, the behaviour is more like a tic than a genuine investigation. I'm interested in the cases that involve genuine inquiry and investigation. My checkers are really trying to collect more information and are not just performing certain habitual movements or looking at the stove for any number of other (nonepistemic) reasons. I take it that typical double-checkers and triple-checkers (etc.) are genuine inquirers." (p. 85) And further: "Acts only count as acts of inquiry when they are grounded in or perhaps motivated by an inquirer's desire to know more or figure something out or understand something better. This means that genuine inquiry is an activity with an essential attitudinal component: inquirers have epistemic aims, and actions done in the service of inquiry are in part motivated by those epistemic aims. This attitudinal component of inquiry comes in many familiar forms: curiosity, wondering, contemplation, deliberation, and more. I've called these attitudes 'interrogative attitudes'. All inquirers have interrogative attitudes, i.e., a subject inquiring into Q at t has an interrogative attitude towards Q at t. Inquirers are wondering where their passports are or curious about whether they left the stove on, and so on. But the interrogative attitudes all involve suspension of judgment." (p. 88)

And this leads to the crucial distinction: "[W]e will see that the cognitive processes of ordinary self-reflection and of checking one's own beliefs support different epistemic intuitions. Importantly, Moorean reasoning is intuitively correctly used in contexts of ordinary self-reflection but is an intuitively flawed method for checking whether our own beliefs are true." (Ibid.: 223)
This will be understood as pointing to the restraint solution. In short, for Guido, ordinary self-reflection is not problematic, while checking one's own beliefs is. To put it in virtue-epistemological terms, ordinary self-reflection can go along with a virtue, while checking one's own beliefs points to a vice. A most dramatic variant of the checking problem in relation to skepticism is bootstrapping. Minimal bootstrapping, as described in the book (Ibid.: 199), is inference to the proposition p that starts from some observation that indicates that p ("I have hands"), notes the fact of indication (my sight indicates this), and concludes that the observation truly indicates ("My sight is right about me having hands"). Now, our drive to check motivates us to do the bootstrapping, which then leads to epistemic defeat, which results in the skeptic winning. The Moorean type of proof of our knowledge concerning the external world leads to such a defeat if we take it as a checking strategy. Guido notes that "[o]ur intuitions concerning bootstrapping say that basic knowledge and knowledge via induction are plausible, but that bootstrapping does not lead to knowledge" (Ibid.: 7). He notes that these intuitions are mutually incompatible. The secret lies in the trap of checking. "Bootstrapping is an obviously insensitive method for determining whether a source is reliable and, therefore," Guido continues, it "fails to be a checking method. When deliberating about bootstrapping, we enter a context of checking in which we think that insensitive methods cannot yield knowledge. Therefore, we regard bootstrapping as an inappropriate method for acquiring knowledge about the reliability of a source." (Ibid.) This leads to an interesting discussion of Moore and raises interesting and important issues concerning the rationality of checking, which should be addressed in the future.
4 Compare Jane Friedman (2019): First, checking is inquiring. Sometimes we "check" in a thinner sense-we have the habit of jiggling the lock a few extra times or tapping our pockets when something important is in there. In some of these cases, the behaviour is more like a tic than a genuine investigation. I'm interested in the cases that involve genuine inquiry and investigation. My checkers are really trying to collect more information and are not just performing certain habitual movements or looking at the stove for any number of other (nonepistemic) reasons. I take it that typical double-checkers and triple-checkers (etc.) are genuine inquirers. (p. 85). As well as: Acts only count as acts of inquiry when they are grounded in or perhaps motivated by an inquirer's desire to know more or figure something out or understand something better. This means that genuine inquiry is an activity with an essential attitudinal component: inquirers have epistemic aims, and actions done in the service of inquiry are in part motivated by those epistemic aims. This attitudinal component of inquiry comes in many familiar forms: curiosity, wondering, contemplation, deliberation, and more. I've called these attitudes 'interrogative attitudes'. All inquirers have interrogative attitudes, i.e., a subject inquiring into Q at t has an interrogative attitude towards Q at t. Inquirers are wondering where their passports are or curious about whether they left the stove on, and so on. But the interrogative attitudes all involve suspension of judgment. (p. 88).
Bounded Reflective Curiosity: a Virtue-Epistemological Perspective I want to explore the virtue-epistemological perspective to issues concerning curiosity, in a somewhat Moorean spirit, relying upon suggestions from Guido's book summarized in the previous section. 5 My background assumption is that we need both character virtues and virtues-abilities and that the former play a crucial role in matters of motivation which are of interest here (I assume that virtues-abilities are not problematic). 6 Let me note that the virtue-theoretical positive evaluation of restraint, like the one to be proposed here, can be found in Neil C. Manson's paper aptly titled "Epistemic Restraint and the Vice of Curiosity" (2012). He appears to me more critical of curiosity than I am, but we both accept that epistemic restraint can be a virtuous stance. In my book (2020) I argue that curiosity is the central motivating epistemic virtue. A human being devoid of curiosity would have little motivation to arrive at true belief and knowledge. In normal cases, it is curiosity that motivates us to gain true belief and knowledge. On the usual view of motivating virtues, this would seem to make it a virtue; since it is the main spring of motivation, we should take it as the motivating epistemic virtue. After all, wanting to know whether p gives cognizers particular instances of p (or of its negation) as particular goals and the truth as the general epistemic goal. Thus, we have a truth-focused motivating virtue: inquisitiveness or curiosity having the reliable arrival at truth as general goal. This is, I claim in my book, the core motivating epistemic virtue. Guido talks briefly about virtue-epistemological views (3.7 Checking and Knowing, SAC, and Virtue Epistemology). He does not precisely outline the nature of his normative framework. I shall simply assume that the norms of rationality guide our quest for knowledge (joining authors, such as Duncan Pritchard, who speak of "rationally grounded knowledge" as their target (Pritchard, 2016: 34), and of "rational evaluation" (Ibid.: 55ff) in epistemology) and support the evaluation in terms of virtue vs. vice. I shall also take a broader view following the good old Aristotelian paradigm with virtue in the middle and vices on both sides: Vice 1-Virtue-Vice 2. Furthermore, I will place the main ideas of Guido's picture into this paradigm. Let us start with the vices. We can contrast two extremes. The first negative extreme is epistemic rashness in inquiry, reasoning, and argumentation, which goes with gullibility, uncritical acceptance, and the like. The opposite vicious extreme is active inconfidence and misplaced mistrust. We can wonder about its motivation and causes (see below my analysis of rationality). Here, we encounter the vicious need to check. In line with Guido's characterization, we can assume that in this vicious practice too much value is ascribed to sensitivity and that it further produces another typical negative effect: a drive to bootstrap. This second, opposite negative extreme, is our topic here. Remember Guido's diagnosis of the skeptical problem: [I]n contexts of ordinary self-reflection we have external-world knowledge and knowledge that the skeptical hypothesis is false via Moorean reasoning whereas in contexts of doubting and checking our own beliefs we know none of these propositions since Moorean reasoning cannot yield knowledge in these contexts. (Melchior, 2019: 259) Now, what about the virtue in the middle? 
How far may we go in accepting epistemic offers from our senses, intuition, other people's testimony, and so on? Guido does not name the stance required for knowledge. I would call the requisite quality scrupulosity. It goes with vigilance and investigative interest (curiosity), and it is closely connected with a desire to check, but it does not overdo it. Thus, my guess is that this is the virtue in the middle. Certainly, the desire to check can go too far and become strong and active mistrust, which finally pushes the thinker to bootstrapping. We can wonder about its grounds and causes; this would bring us to the topic of rationality, but we cannot address it at this point. The vices mentioned here are zetetic, motivational vices that are intrinsic to the epistemic structure, i.e., perversions of inquisitiveness-curiosity. The other kind, irrelevant here, are motivational vices that are external to the epistemic structure, i.e., of a more practical, less theoretical kind (for instance, those discussed by Cassam (2019) as "vices of the mind" in his book of the same title). Connections with the issues of rationality are quite close and direct. Consider the general, all-encompassing epistemic rationality (what is the rational way of facing epistemic uncertainty?). It is easy to identify the lack of rationality (or crippled rationality) in the case of the vices of rashness: there is not enough reasoning, no questioning, so the cognitive process is performed by an automatic mechanism (what is called "System One" in the tradition of heuristics-and-biases theories of rationality). It is also easy to see the "virtue in the middle," i.e., epistemic scrupulosity, as an exercise in rationality. Our cognitive system performs a decoupling of the target belief and its immediate, spontaneous context, and investigates it from a sufficiently wide perspective. However, the question that arises in discussing skepticism and bootstrapping is more difficult to judge: in what sense does the wish to check go too far, thus blocking knowledge? It appears similar to the aforementioned problems with decoupling: too much and in the wrong places. The target belief is completely isolated from context and judged in abstracto; no wonder this leads to desperate bootstrapping and the blocking of knowledge. Guido's book thus raises interesting and important issues concerning the rationality of checking. Let me connect them with my view of curiosity. What the book describes as the motivation to check seems to me a particular kind of curiosity. To stay with examples from the book, consider ordinary curiosity about the animal at the exhibit. The first-order curiosity asks what animal it is, or whether it is a zebra. Checking happens at a higher level: if Thomas doubts and asks himself whether it is really a zebra or whether he has misperceived the animal, he climbs to a higher level. The book specifies that he intentionally uses the chosen method to find out the truth of his initial impression. Checking is then a matter of reflective curiosity, not of simple, naïve, first-order curiosity. Redirected and restrained inquisitiveness is here the road to virtue. In particular, the epistemological solution I generally prefer is what the book calls type-2 moderate invariantism, which in our case claims that Thomas's initial evidence for the zebra belief contains a sensitive method for establishing it, namely perception.
However, it does not contain a method which would, in addition to establishing that the animal is a zebra, ascertain (1) that he is not being deceived and (2) that nothing rare and exotic has happened which would make a non-zebra look like a zebra (as happened in Gaza). I agree with Guido (Melchior, 2019: 172) that consulting a sensitive method for determining the truth of the two claims is not necessary for knowing that the animal is a zebra. Furthermore, I agree that we falsely think that Thomas does not know because, in contexts in which we are checking whether all the candidate propositions are true, we falsely think that consulting a sensitive method is necessary for knowing that the animal is a zebra. When Thomas turns to the checking of his zebra belief, his curiosity becomes reflective and refers to his first-level impression. If he goes too far with this attitude, his reflective curiosity will become vicious and turn him into Doubting Thomas. 7 In short, if Thomas does not ask, he knows. This I view as a link to my curiosity book (Miščević, 2020). Thus, it seems that if I do not ask, I know that I am not being deceived. And I know that if my impression is correct, I am not being deceived. Marian David has kindly asked in the discussion in what sense this holds, namely, in what sense Thomas knows all this. The answer: in an implicit, tacit way. The mental sentences "I am not being deceived" and "If I have this impression, I am not being deceived" are in his belief box, in its "tacit" sub-box. In short, skepticism seems tied to excessive and wrong-headed inquisitiveness; perhaps it is even its result. Too intense checking can make irrelevant alternatives come into play, thus making them relevant. "Can I be sure this is not a painted mule?" is the crucial reflective question. Once it is raised, the desire to check arises, and it can easily go too far and turn into strong and active mistrust; it thus becomes an epistemic vice. Guido writes that consulting a sensitive method for o and not d/l being true is not necessary for knowing that o, and this looks like a good diagnosis to me. Linda Zagzebski comes close to the restraint solution when discussing virtuous self-trust: My response to skepticism is that we have the same grounds for rejecting it as we have for taking it seriously in the first place. Skepticism arises from the belief that there is a gap between the mind and the world. We have no argument for that belief, but it is natural. It is equally natural to believe the gap can be bridged. That belief, I've argued, is reasonable, and because we have that belief, we need self-trust. Self-trust is reasonable in the sense that it is unreasonable to permit reason to thwart our nature. The person who takes skepticism seriously enough to let it affect her confidence in a wide range of her beliefs, emotions, and acts is a person who permits reason to thwart her nature. It is not reasonable to do that even if the use of reason does not show us a convincing response to the infinite regress argument or Cartesian skepticism. (2009: 74) 8 What about other central examples and the corresponding kinds of problems? What are the principled differences between various kinds of examples? Take the contrast between the BIV and the Zebra case (David raised this question in the discussion). Reflective curiosity goes too far in both cases, but it differs in direction.
The excess of the BIV case consists in its going too far into the thought-experimental scenario, while the excess of the Zebra case is in the subject's digging too deep into the ordinary scenario. In Kripke's Red barn case, the contrast is between the observer's knowledge and the subject's knowledge, since the subject does not know that only red barns are really barns. The judge raises the level of her second-order curiosity by projecting the observer's alternative onto the subject's mind. In high- vs. low-stakes cases, we have a contrast between the external value or usefulness of the stakes and the immanent value of knowledge. The drive to check normally neglects the low value of the stakes, focusing on the knowledge vs. ignorance contrast. An interesting contrast is the one between deeply obvious truths and less firm candidates. Wittgenstein makes this contrast central to his account by calling the former "hinges" (Ger. Angeln). He claims that doubting in their case makes no sense, that it cannot even start in normal circumstances. In our terminology, second-order curiosity in relation to hinges is particularly vicious. Here is a quote from Wittgenstein's On Certainty (1969): 19. The statement "I know that here is a hand" may then be continued: "for it's my hand that I'm looking at". Then a reasonable man will not doubt that I know.-Nor will the idealist; rather he will say that he was not dealing with the practical doubt which is being dismissed, but there is a further doubt behind that one.-That this is an illusion has to be shown in a different way. 163. Does anyone ever test whether this table remains in existence when no one is paying attention to it? We check the story of Napoleon, but not whether all the reports about him are based on sense-deception, forgery, and the like. For whenever we test anything, we are already presupposing something that is not tested. Now am I to say that the experiment which perhaps I make in order to test the truth of a proposition presupposes the truth of the proposition that the apparatus I believe I see is really there (and the like)? 164. Doesn't testing come to an end? (Hat das Prüfen nicht ein Ende?) 9
9 And here is more: 444. "The train leaves at two o'clock. Check it once more to make certain" or "The train leaves at two o'clock. I have just looked it up in a new time-table". One may also add "I am reliable in such matters". The usefulness of such additions is obvious. 445. But if I say "I have two hands", what can I add to indicate reliability? At the most that the circumstances are the ordinary ones. 446. But why am I so certain that this is my hand? Doesn't the whole language-game rest on this kind of certainty? […]. 485. We can also imagine a case where someone goes through a list of propositions and as he does so keeps asking "Do I know that or do I only believe it?" He wants to check the certainty of each individual proposition. It might be a question of making a statement as a witness before a court. 486. "Do you know or do you only believe that your name is L. W.?" Is that a meaningful question? Do you know or do you only believe that what you are writing down now are German words? Do you only believe that "believe" has this meaning? What meaning?
Nonetheless, there is the possibility of disagreement: for Wittgenstein, things that cannot be checked, i.e., the hinges, cannot be objects of knowledge. For Guido and me, checking is independent of knowledge; therefore, no Wittgensteinian conclusion can be reached. If we read Guido's proposal from a virtue-epistemological perspective, virtue epistemology here parts ways with Wittgenstein: the restraint solution is not Wittgensteinian! Conclusion: the Restraint Solution and the Ideal of Bounded Reflective Curiosity In this paper, I express my agreement with the central line of the book, culminating in the restraint solution of the skeptical puzzle, and neither question nor criticize it. Instead, I propose a virtue-epistemological interpretation of the restraint solution and re-interpret the problem of excessive checking as the problem of unbounded reflective curiosity. Accepting the idea that rationality guides our quest for knowledge, I propose a simple division of epistemic virtues and vices: rationality embodies virtues in the middle, and vices form the extremes: Vice 1-Virtue-Vice 2. Guido's account, based on the need to keep checking under control, provides a fine recipe for a rational reaction to skepticism from the perspective of epistemic virtues. As mentioned above, I prefer the solution to the skeptical problem that Guido calls type-2 moderate invariantism. It assumes that in typical situations the subject's evidence contains a sensitive method for the truth of some ordinary proposition, but not for the joint truth of (1) the latter proposition, (2) the proposition that she is not being deceived, and (3) the proposition that there is no threatening lottery puzzle lurking around. I agree with the central claim of the book that consulting a sensitive method for the joint truth of the three propositions is not necessary for knowing that the first, ordinary one holds. Most importantly, I agree that people reflecting on this kind of situation falsely believe that the subject does not know that the ordinary proposition holds because, in contexts of checking whether all three are true, we falsely believe that consulting a sensitive method for ascertaining that the non-deception and non-lottery propositions are true is necessary for knowing that the ordinary one holds. (Cf. Melchior, 2019: 172). This line of argument, I hope, can be applied to other important issues in epistemology, such as low vs. high stakes, closure puzzles, and the like. As I noted, Guido's analysis thus in fact points to a possible virtue-theoretic answer to skepticism, which I call the restraint solution: activate your self-trust and restrain your inquisitiveness! This idea of the restraint solution leads one to the ideal of bounded reflective curiosity: when it comes to knowing, we should restrain our second-order, reflective curiosity and stay content with a somewhat Moorean trust in ordinary, everyday beliefs. We can preserve our ordinary, first-order vigilance and investigative interest (curiosity) without falling into skeptical over-caution, which is basically a reflective, second-order vicious attitude. What counts as belonging to first-order vigilance and investigative interest? We can imagine an interlocutor appealing to Sosa's version of the virtue-theoretical answer to skepticism and therefore insisting on the inclusion of reflective, second-order investigation into the struggle with skepticism. In writing on Moore, Sosa states that ordinary beliefs are acceptable on the first level but reflectively justified on the second level: None of the options considered by Moore holds much attraction to us now. What wrong turn leads to that blind alley? One mistake is to suppose that you can know about the hand only if you know you are not dreaming.
You must not be dreaming, of course, but you needn't know it, not for animal knowledge. Animal knowledge of the hand requires no knowledge that it is not just a dream. So, we could just respond to the skeptic by denying what Moore is so willing to grant, as is Descartes if we believe Moore. "What is required by our perceptual knowledge of a fire we see, or a hand," we could respond, "is just that we be awake, and not that we know we are awake." (Sosa, 2009: 21) Then, he claims that this "would take us only part of the way out. For, we want a knowledge that is not just animal but also reflective. We want a knowledge that is defensible in the arena of reflection." (Ibid.) He further stresses the role of reflective knowledge in the refutation of skepticism: Although reflective knowledge requires knowledge that we are awake, fortunately, this required knowledge need not be prior knowledge. Here's why. Reflective justification is web-like, not transmissively linear. The web of belief attaches to the world through perception and memory. But each of its nodes depends on other nodes directly or indirectly. The web is woven through the rational basing of beliefs on other beliefs or experiences. There is no reason why such basing must be asymmetrical, however, no reason that precludes each belief from being based at least in part (perhaps minuscule part) on other beliefs. Each might thus derive its proper epistemic status from being based on others in a web that is attached to the world by causation through perception or memory. (Ibid.: 22) One could reply by agreeing with Sosa's main line: yes, we need the more holistic, web-like support for our everyday beliefs, and yes, this involves some reflection. However, it need not be the over-demanding, checking reflection. We need not start by doubting our everyday beliefs, we just reflectively seek some additional support for them by asking questions about the world. Remember Guido's suggestion that the cognitive processes of ordinary self-reflection and of checking one's own beliefs support different epistemic intuitions: in cases of ordinary self-reflection, the focus of S's attention tends to be the world (Melchior, 2019: 222). Importantly, Moorean reasoning is intuitively correctly used in contexts of ordinary self-reflection but is an intuitively flawed method for checking whether our own beliefs are true. (Ibid.: 223). Consider the Zebra case. I can visit the favorite zoo of my early days, the one in Zagreb, and ask myself what kind of animal the zebra-like mammal in front of me is. The question is primarily about the animal, not about my beliefs concerning it. I can then appeal to my web of belief concerning the Zagreb Zoo: I used to go there and spend hours looking at animals, and I read the local weekly magazine discussing matters related to the Zoo; there was never a scandal involving fake animals. My biology teacher never warned me of such a possibility, nor did the biology students I knew later. Thus, I can safely assume that there is a zebra in front of me. The Sosa-style web-of-belief reflection is not the exaggerated checking reflection present in skeptical scenarios. 10 The critic might argue that the difference between the reflection on the truth of some problematic proposition and the problematic reflection on one's beliefs is insignificant and, in any case, less dramatic than described here. I agree that this valid question merits further discussion. 
However, for the moment I stay with the optimistic view that we can safely follow the restraint solution and the ideal of bounded reflective curiosity. This line of thought could and perhaps should be developed as a new virtue-theoretic answer to skepticism, and compared, for instance, to the Wittgenstein-inspired answers that limit the scope of legitimate checking as well. Nonetheless, I hope I have shown that the virtue-theoretic re-interpretation of Guido's interesting and original line of thought is useful and promising. Funding: Open access funding provided by Central European University Private University. Declarations: Competing Interests: The authors declare no competing interests. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2023-01-27T16:08:22.889Z
2023-01-25T00:00:00.000
{ "year": 2023, "sha1": "f4c08bfc79741929cb552940f0bfcfecf7b08851", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12136-022-00538-9.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "57eb0c9fe05580ebbefbf3997733430344f198ba", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [] }
13381848
pes2o/s2orc
v3-fos-license
Resonant transmission of self-collimated beams through coupled zigzag-box resonators: slow self-collimated beams in a photonic crystal The resonant transmission of self-collimated beams through zigzag-box resonators is demonstrated experimentally and numerically. Numerical simulations show that the flat wavefront and the width of the beam are well maintained after passing through zigzag-box resonators because the upper and lower zigzag-sides prevent the beam from spreading out and the wavefront is perfectly reconstructed by the output zigzag-side of the resonator. Measured split resonant frequencies of two- and three-coupled zigzag-box resonators agree well with those predicted by a tight-binding model that considers optical coupling between the nearest resonators. Slowing down the speed of self-collimated beams is also demonstrated in simulations using a twelve-coupled zigzag-box resonator. Our work could be useful in implementing devices to manipulate self-collimated beams in the time domain. © 2012 Optical Society of America. OCIS codes: (230.4555) Coupled resonators; (260.2030) Dispersion; (260.5950) Self-focusing; (230.5298) Photonic crystals. References and links 1. H. Kosaka, T. Kawashima, A. Tomita, M. Notomi, T. Tamamura, T. Sato, and S. Kawakami, "Self-collimating phenomena in photonic crystals," Appl. Phys. Lett. 74, 1212–1214 (1999). 2. P. T. Rakich, M. S. Dahlem, S. Tandon, M. Ibanescu, M. Soljačić, G. S. Petrich, J. D. Joannopoulos, L. A. Kolodziejski, and E. P. Ippen, "Achieving centimetre-scale supercollimation in a large-area two-dimensional photonic crystal," Nat. Mater. 5, 93–96 (2006). 3. Z. Lu, S. Shi, J. A. Murakowski, G. J. Schneider, C. A. Schuetz, and D. W. Prather, "Experimental demonstration of self-collimation inside a three-dimensional photonic crystal," Phys. Rev. Lett. 96, 173902 (2006). 4. S.-H. Kim, T.-T. Kim, S. S. Oh, J.-E. Kim, H. Y. Park, and C.-S. Kee, "Experimental demonstration of self-collimation of spoof surface plasmons," Phys. Rev. B 83, 165109 (2011). 5. D. Chigrin, S. Enoch, C. Sotomayor Torres, and G. Tayeb, "Self-guiding in two-dimensional photonic crystals," Opt. Express 11, 1203–1211 (2003). 6. X. Yu and S. Fan, "Bends and splitters for self-collimated beams in photonic crystals," Appl. Phys. Lett. 83, 3251–3253 (2003). 7. D. W. Prather, S. Shi, D. M. Pustai, C. Chen, S. Venkataraman, A. Sharkawy, G. J. Schneider, and J. Murakowski, "Dispersion-based optical routing in photonic crystals," Opt. Lett. 29, 50–52 (2004). 8. S.-G. Lee, S. S. Oh, J.-E. Kim, H. Y. Park, and C.-S. Kee, "Line-defect-induced bending and splitting of self-collimated beams in two-dimensional photonic crystals," Appl. Phys. Lett. 87, 181106 (2005). 9. T.-T. Kim, S.-G. Lee, S.-H. Kim, J.-E. Kim, H. Y. Park, and C.-S. Kee, "Asymmetric Mach-Zehnder filter based on self-collimation phenomenon in two-dimensional photonic crystals," Opt. Express 18, 5384–5389 (2010). 10. T.-T. Kim, S.-G. Lee, S.-H. Kim, J.-E. Kim, H. Y. Park, and C.-S. Kee, "Ring-type Fabry-Perot filter based on the self-collimation effect in a 2D photonic crystal," Opt. Express 18, 17106–17113 (2010). 11. Z. Li, H. Chen, Z. Song, F. Yang, and S. Feng, "Finite-width waveguide and waveguide intersections for self-collimated beams in photonic crystals," Appl. Phys. Lett. 85, 4834–4836 (2004).
12. A. Taflove, Computational Electrodynamics: The Finite-Difference Time-Domain Method (Artech House, Boston, 1995). 13. http://ab-initio.mit.edu/wiki/index.php/Meep. 14. S.-G. Lee, J.-S. Choi, J.-E. Kim, H. Y. Park, and C.-S. Kee, "Reflection minimization at two-dimensional photonic crystal interfaces," Opt. Express 16, 4270–4277 (2008). 15. T.-T. Kim, S.-G. Lee, M.-W. Kim, H. Y. Park, and J.-E. Kim, "Experimental demonstration of reflection minimization at two-dimensional photonic crystal interfaces via antireflection structures," Appl. Phys. Lett. 95, 011119 (2009). 16. C.-S. Kee and H. Lim, "Coupling characteristics of localized photons in two-dimensional photonic crystals," Phys. Rev. B 67, 073103 (2003). 17. A. Yariv, Y. Xu, R. K. Lee, and A. Scherer, "Coupled-resonator optical waveguide: a proposal and analysis," Opt. Lett. 24, 711–713 (1999). 18. M. Bayindir, B. Temelkuran, and E. Ozbay, "Tight-binding description of the coupled defect modes in three-dimensional photonic crystals," Phys. Rev. Lett. 84, 2140–2143 (2000). 19. T. F. Krauss, "Why do we need slow light?" Nat. Photon. 2, 448–450 (2008). 20. T. Baba, "Slow light in photonic crystals," Nat. Photon. 2, 465–473 (2008). 21. J. B. Khurgin, "Slow light in various media: a tutorial," Adv. Opt. Photon. 2, 287–318 (2010). 22. E. Ozbay, A. Abeyta, G. Tuttle, M. Tringides, R. Biswas, C. T. Chan, C. M. Soukoulis, and K. M. Ho, "Measurement of a three-dimensional photonic band gap in a crystal structure made of dielectric rods," Phys. Rev. B 50, 1945–1948 (1994). 23. M. Notomi, E. Kuramochi, and T. Tanabe, "Large-scale arrays of ultrahigh-Q coupled nanocavities," Nat. Photonics 2, 741–747 (2008). Introduction The self-collimation phenomenon, the diffractionless propagation of a light beam in photonic crystals (PCs), has been of great interest in recent years because it could provide a new way to manipulate light propagation in PCs [1–5]. It has been experimentally demonstrated that self-collimated beams can be well guided in PCs without the use of any physical boundary and effectively routed by employing bends and splitters [6–10]. Moreover, self-collimated beams can cross without cross talk [11]; hence, optical devices based on self-collimated beams have an intrinsic potential for high-density photonic integrated circuits. Resonators are of fundamental and practical interest in optical devices such as cavities, waveguides, filters, couplers, and so on. A mirror is necessary to make a resonator. A 45-degree mirror that gives rise to total internal reflection of a self-collimated beam has been proposed [6,8]. However, it is difficult to make a resonator composed of 45-degree mirrors because the beams reflected from 45-degree mirrors can hardly produce destructive interference. Thus, to realize a resonator for self-collimated beams, it is necessary to make a high-quality mirror that strongly reflects self-collimated beams in the backward direction.
Designing a mirror for self-collimated beams can start from reviewing the fundamental properties of a self-collimated beam. One of these properties is that a self-collimated beam has a flat wavefront perpendicular to its propagation direction. One can reasonably conjecture that an appropriate geometry that breaks the flat wavefront may cause strong back-reflection of self-collimated beams. A zigzag-shape line-defect is one structure that can break the flat wavefront of a self-collimated beam. We will show that a zigzag-shape line-defect can strongly reflect incident self-collimated beams in the backward direction and act as a practical mirror for self-collimated beams. A self-collimated beam has an uncertainty in its wavevector along the direction perpendicular to the beam propagation direction due to the finite beam size. The uncertain wavevector components make the beam spread out along the direction perpendicular to the propagation direction in the resonator. However, the resonant phenomenon requires that the beam size of an incident self-collimated beam be preserved after passing through a resonator. In addition, for the beam not to spread, an incident self-collimated beam broken by one side of a resonator should be reconstructed by another side. Thus, a resonator composed of zigzag-shape line-defects should be designed carefully and should have a two-dimensional shape, like a zigzag-box. In this paper, we experimentally and numerically demonstrate the resonant transmission of self-collimated beams through a designed zigzag-box resonator. Numerical simulations show that the flat wavefront and the width of the beam are well maintained after passing through a zigzag-box resonator because the upper and lower zigzag-sides prevent the beam from spreading out and the wavefront is perfectly reconstructed by the output zigzag-side of the resonator. We have also investigated the resonant transmission characteristics of two- and three-coupled zigzag-box resonators. The split resonant frequencies of the coupled resonators agree well with those predicted by a tight-binding (TB) model that considers optical coupling between the nearest resonators. Slowing down the speed of self-collimated beams is also demonstrated in simulations using a twelve-coupled zigzag-box resonator.
Results and discussion It has been demonstrated that microwave experiments are very useful in testing the properties of newly designed PC devices before applications in the infrared or visible wavelength ranges. In this study, we employ a square-lattice PC composed of cylindrical alumina rods with a dielectric constant of 9.7 in air. The lattice constant and the radius of the rods are a = 5 mm and r = 0.4a = 2 mm, respectively. Two parallel aluminum plates with periodically drilled holes are used to hold the alumina rods vertically. E-polarized microwaves (electric field parallel to the rod axes) at frequencies around 12.5 GHz can propagate with almost no diffraction along the ΓM direction inside the PC [9,10]. An HP 8720C network analyzer and two horn antennas are used in the experiments. Numerical simulations are performed using the finite-difference time-domain (FDTD) method with a perfectly matched layer absorbing boundary condition [12]. The freely available FDTD software package MEEP [13] was employed. The spatial resolutions are Δx = Δy = a/32. The discrete time step is set to Δt = SΔx/c, where the Courant factor S is chosen to be 0.5 for stable simulations. To obtain transmission spectra, a Gaussian pulse with a waist of w = 4a is launched into the PC and the transmitted power P_t(ω) is computed at the end of the PC. P_t(ω) is normalized to the incident power P_i(ω) calculated near the source plane without the PC structure. Antireflection structures are employed to eliminate unwanted reflections at the PC-air interfaces [14,15]. We first investigated the transmission properties of self-collimated beams through a zigzag-shape mirror (ZSM) created by missing rods, as shown in the inset of Fig. 1. One can clearly see that the flat wavefront of a self-collimated beam cannot be maintained at the zigzag-shape PC-air interface, and the beam is strongly reflected in the backward direction. The flat wavefront of the reflected beam gets distorted as the height of the zigzag step increases, and the reflectance decreases. When the length of the mirror increases, the flat wavefront of the reflected beam is well maintained and the reflectance increases slightly. Figure 1 presents the measured and simulated transmission spectra of self-collimated beams through a ZSM in the frequency range from 12.1 to 12.9 GHz. The measured (simulated) transmittance of self-collimated beams through the ZSM is less than 9% (3%), and thus the proposed ZSM clearly acts as a back-reflection mirror for self-collimated beams. We next designed a two-dimensional zigzag-box resonator, as shown in Fig. 2(a). The measured (simulated) transmission spectra plotted in Fig. 2(b) show the resonant transmission of a self-collimated beam at the frequency f₀ = 12.575 GHz (12.559 GHz). The measured frequency is slightly higher than the simulated one. The discrepancy between them may come from a small uncertainty in the dielectric constant of the alumina rods, since the simulated frequency matched the measured one when the dielectric constant of the alumina rods was set to 9.68, slightly smaller than 9.7. The Q-factor estimated from the relation f₀/Δf in the measurement (the FDTD simulation), where Δf is the full width at half maximum, is about 607 (718).
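For readers who want to reproduce this kind of calculation, a minimal MEEP sketch of the 2D setup described above is given below. It is a sketch under stated assumptions, not the authors' script: the size of the rod region, the PML thickness, the flux-monitor placement, and the stopping criterion are illustrative choices, and the zigzag defects and antireflection structures are omitted.

```python
import meep as mp

# Scale-free MEEP units: lengths in units of the lattice constant a (= 5 mm),
# frequencies in units of c/a; 12.5 GHz corresponds to f*a/c ~ 0.208.
r, eps = 0.4, 9.7
fcen, df = 0.208, 0.02          # Gaussian pulse roughly covering 12.1-12.9 GHz
L = 12                          # half-size of the rod region (illustrative)

# Square lattice rotated by 45 degrees so that +x is the GammaM direction.
geometry = []
for i in range(-2 * L, 2 * L + 1):
    for j in range(-2 * L, 2 * L + 1):
        x, y = (i + j) / 2**0.5, (i - j) / 2**0.5
        if abs(x) <= L and abs(y) <= L:
            geometry.append(mp.Cylinder(radius=r, height=mp.inf,
                                        material=mp.Medium(epsilon=eps),
                                        center=mp.Vector3(x, y)))

cell = mp.Vector3(2 * L + 12, 2 * L + 12, 0)
src = [mp.Source(mp.GaussianSource(frequency=fcen, fwidth=df),
                 component=mp.Ez,               # E-polarization: E along the rod axes
                 center=mp.Vector3(-L - 2, 0),
                 size=mp.Vector3(0, 4.0))]      # beam waist w = 4a

sim = mp.Simulation(cell_size=cell, geometry=geometry, sources=src,
                    boundary_layers=[mp.PML(2.0)],
                    resolution=32)              # dx = dy = a/32; Courant S = 0.5 is MEEP's default

# Transmitted power P_t(omega) behind the crystal; a normalizing run with
# geometry=[] and the same monitor records P_i(omega), and T = P_t / P_i.
trans = sim.add_flux(fcen, df, 200,
                     mp.FluxRegion(center=mp.Vector3(L + 2, 0), size=mp.Vector3(0, 8)))
sim.run(until_after_sources=mp.stop_when_fields_decayed(
    50, mp.Ez, mp.Vector3(L + 2, 0), 1e-4))
print(mp.get_fluxes(trans))
```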
The simulated electric-field distribution of the beam at f₀ = 12.559 GHz, presented in Fig. 2(c), shows that the flat wavefront and the width of the resonant self-collimated beam are well maintained after passing through the zigzag-box resonator, because the upper and lower zigzag-sides prevent the beam from spreading out and the wavefront is perfectly reconstructed by the output zigzag-side of the resonator. We also investigated coupled zigzag-box resonators, which exhibit multiple resonant transmission peaks due to the evanescent coupling between individual resonator modes. Figure 3(a) shows the measured and simulated transmission spectra of self-collimated beams through a two-coupled zigzag-box resonator with an inter-resonator distance of 15√2 mm. The measured (simulated) resonant frequencies are Ω₁ = 12.518 GHz (12.502 GHz) and Ω₂ = 12.630 GHz (12.615 GHz). The simulated electric-field distributions of the resonant modes with resonant frequencies Ω₁ and Ω₂ are shown in Figs. 3(b) and 3(c), respectively. One can see that the resonant mode with Ω₁ (Ω₂) mimics an odd (even) symmetry mode, even though it is known that coupling between two identical resonant modes in coupled high-dielectric cavities splits their frequency into a lower frequency for the even mode and a higher frequency for the odd mode. Kee and Lim have demonstrated that the parity of the split resonant modes in two-coupled resonators in a 2D PC can be switched due to the correlation between the inter-resonator distance and the period of the oscillatory decaying evanescent fields of the resonant modes [16]. It is well known that a tight-binding model is very useful in predicting the resonant transmission characteristics of coupled resonant systems. According to the TB model, considering the interaction between the nearest resonators only, the two resonant frequencies of a two-coupled resonator are given by Ω²₁,₂ ≃ f₀²(1 ± β)/(1 ± α), where α and β are TB parameters and f₀ is the resonant frequency of a single resonator. The physical meanings of the TB parameters are described in detail in Ref. [17]. When the number of resonators increases, a transmission band is formed due to the evanescent coupling of the individual resonant modes. Physical quantities such as the bandwidth Δf, the dispersion relation f(k), and the group velocity v_g(k) of the propagating modes are determined by the coupling coefficient κ between the nearest resonators: Δf ≃ 2f₀|κ|, f(k) ≃ f₀[1 + κ cos(ka)], and v_g(k) ≃ −2πf₀aκ sin(ka), where the coupling coefficient is defined as κ = β − α and a here denotes the distance between adjacent resonators [17,18].
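Since the TB expressions above are simple closed forms, they are easy to evaluate numerically. The sketch below (plain NumPy) uses the measured values of f₀, α, and β quoted in the next paragraph; treating a as the 15√2 mm inter-resonator spacing is our reading of the formulas, as noted above.

```python
import numpy as np

f0 = 12.575                      # measured single-resonator frequency (GHz)
alpha, beta = -0.0085, -0.0174   # measured TB parameters (quoted below)
kappa = beta - alpha             # coupling coefficient, here -0.0089

# Two-coupled resonator: Omega_{1,2}^2 ~ f0^2 (1 +/- beta)/(1 +/- alpha)
omega_1 = f0 * np.sqrt((1 + beta) / (1 + alpha))   # ~12.518 GHz
omega_2 = f0 * np.sqrt((1 - beta) / (1 - alpha))   # ~12.630 GHz

# Chain of resonators: bandwidth, dispersion, and group velocity,
# with d = 15*sqrt(2) mm taken as the inter-resonator spacing.
c, d = 3e8, 15e-3 * np.sqrt(2)
bandwidth = 2 * f0 * abs(kappa)                    # ~0.22 GHz
kd = np.linspace(0, np.pi, 401)                    # kd over the first Brillouin zone
f_band = f0 * (1 + kappa * np.cos(kd))             # f(k) ~ f0 [1 + kappa cos(kd)]
vg = 2 * np.pi * f0 * 1e9 * d * abs(kappa) * np.sin(kd)
print(omega_1, omega_2, bandwidth, vg.max() / c)   # max v_g ~ 0.05 c
```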
From the measured (simulated) resonant frequencies f₀, Ω₁ and Ω₂, we obtained the TB parameters and the coupling coefficient: α = −0.0085 (0.0022), β = −0.0174 (−0.0067), and κ = −0.0089, respectively. From the TB parameters, one can expect that the group velocities of self-collimated beams should be less than 2πf₀a|κ| ≃ 0.05c over the whole transmission band (Δf ≃ 0.22 GHz, from 12.45 to 12.67 GHz) and should approach zero at the band edges, provided that the TB model is valid. To check the validity of the TB model of the coupled zigzag-box resonator system for self-collimated beams, we compared the theoretical resonant frequencies of a three-coupled zigzag-box resonator predicted by the TB model (Ω²₁,₃ ≃ f₀²(1 ± √2β)/(1 ± √2α) and Ω₂ ≃ f₀) with the measured (simulated) three resonant frequencies obtained from the transmission spectra. Table 1 shows that the measured (simulated) three resonant frequencies coincide well with the frequencies predicted by the TB model. Hence, slowing down self-collimated beams should be possible using coupled systems composed of many zigzag-box resonators. In recent years, slowing the speed of light down to remarkably low velocities has been of great interest due to potential applications such as strong light-matter interactions, optical delay lines, optical buffers, and optical storage [19–21]. Table 1. Three resonant frequencies of a three-coupled resonator obtained from measured (simulated) transmission spectra. The TB_mea (TB_sim) frequencies were calculated from the measured (simulated) TB parameters obtained from the measured (simulated) two resonant frequencies of a two-coupled resonator. The unit of frequency is GHz. To verify the slow propagation of self-collimated beams, we investigated the transmission properties of self-collimated beams through a coupled system composed of twelve zigzag-box resonators. The consecutive resonators were made in a PC of length L_pc = 270√2 mm, and the total length of the resonator region is L_res = 195√2 mm. As shown in Fig. 4(a), the simulated transmission band extending from 12.45 GHz to 12.67 GHz agrees well with the band predicted by the TB model. To obtain the group velocity of light in a medium, it is essential to find the dispersion relation, because the group velocity is given by v_g = 2π(df/dk), where k is the wave vector in the medium. In this study, the dispersion relation of the slow-light modes has been obtained from the fact that the total phase difference Δφ between light propagating through a PC of thickness L and through air is given by (k_pc − k_air)L, where k_pc and k_air are the wave vectors in the PC and in air, respectively [22]. Phases were determined for three different cases: (1) φ_air(f) for air, (2) φ_pc(f) for a PC without a resonator, and (3) φ_res(f) for a PC with the twelve-coupled zigzag-box resonator. Figure 4(b) shows the dispersion relations obtained from the phase calculations (black thick line) and the TB model (red thin line) in the transmission band. The overall tendency of the measured dispersion relation is in good agreement with the one calculated by the TB model. The step-like behavior of the measured dispersion relation comes from the finite number of cavities. In the simulations, we first recorded the time-varying electric fields at the end of the PC and then performed fast Fourier transforms to obtain the phase values. The group velocities of self-collimated beams were obtained by taking the derivative of the calculated dispersion relation, v_g(f) = 2π/[dk(f)/df].
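The phase-based extraction of the dispersion relation and the group velocity can be sketched as follows. The arrays phi_res and phi_air stand for unwrapped phase spectra already obtained from FFTs of the recorded fields (placeholders here), and using L_res as the effective length is our assumption about how the recorded phases are combined.

```python
import numpy as np

# Assumed inputs: frequency axis (Hz) and phase spectra from FFTs of the fields
# recorded behind (1) air and (3) the PC containing the twelve-coupled resonator.
# In practice these would come from the FDTD runs described in the text.
f = np.linspace(12.45e9, 12.67e9, 256)            # transmission band
phi_air = np.zeros_like(f)                        # placeholder arrays
phi_res = np.zeros_like(f)

c = 3e8
L_res = 195e-3 * np.sqrt(2)                       # length of the resonator region (m)

# Total phase difference: delta_phi = (k_res - k_air) * L, with k_air = 2*pi*f/c,
# so the wave vector inside the coupled-resonator region is
k_air = 2 * np.pi * f / c
k_res = k_air + np.unwrap(phi_res - phi_air) / L_res

# Group velocity v_g(f) = 2*pi / (dk/df), evaluated by finite differences
vg = 2 * np.pi * np.gradient(f) / np.gradient(k_res)
print(vg.min() / c)   # with real phase data, v_g drops below ~c/100 near the band edges
```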
The results are plotted in Fig. 4(c) together with the theoretical curve from the TB model. The calculated group velocities (black solid squares) oscillate around the theoretical curve (red thin line) and have local minima at the resonant frequencies where the transmission exhibits peak values. The oscillatory behavior of the group velocity would disappear, and the group-velocity curve would approach the theoretical curve, if the number of cavities became infinite [23]. The group velocity noticeably decreases to values less than c/100 as the frequency approaches the edges of the transmission band. It would be challenging to make zigzag-box resonators in 2D PC slabs operating at optical frequencies, even though we have demonstrated their performance in the microwave range. Coupled zigzag-box resonators fabricated at optical wavelength scales could be useful in designing devices to manipulate self-collimated beams in the time domain, such as delay lines and optical storage in optical communications. Conclusion In conclusion, the resonant transmission of self-collimated beams through the zigzag-box resonator was demonstrated experimentally and numerically. Using FDTD simulations, we showed that the flat wavefront and the size of the resonant self-collimated beam are preserved after passing through the box resonator. We have investigated the multi-resonant transmission characteristics of two- and three-coupled zigzag-box resonators and analyzed the split frequencies using the TB model. Slowing down the speed of self-collimated beams, demonstrated using a twelve-coupled zigzag-box resonator, could be useful in implementing devices to manipulate self-collimated beams in the time domain. Fig. 1. Experimental and FDTD-simulated transmittances of self-collimated beams through the zigzag-shape mirror in the frequency range from 12.1 to 12.9 GHz. The inset depicts a zigzag-shape mirror that strongly reflects an incident self-collimated beam with a frequency of 12.5 GHz. Fig. 3. Transmission spectra of self-collimated beams through a two-coupled zigzag-box resonator with two resonant frequencies Ω₁ and Ω₂ (a). Black thick and red thin lines indicate experimental and FDTD-simulated results, respectively. The simulated electric-field distributions of the resonant modes with resonant frequencies Ω₁ (b) and Ω₂ (c). Fig. 4. (a) Transmission spectrum of self-collimated beams through a twelve-coupled zigzag-box resonator. (b) Dispersion relations of the transmission band obtained from the phase calculations (black thick line) and the TB model (red thin line). (c) Group velocities obtained from the FDTD simulations (black solid squares) and the TB model (red thin line). Dashed vertical lines represent the resonant frequencies of a twelve-coupled zigzag-box resonator.
2017-11-02T03:07:27.926Z
2012-04-09T00:00:00.000
{ "year": 2012, "sha1": "d1e482e83b998f04cbb98e3d76d87f789e9ca804", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.20.008309", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "d1e482e83b998f04cbb98e3d76d87f789e9ca804", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
269048884
pes2o/s2orc
v3-fos-license
Case method learning with multicultural approach: The implementation to increase students' historical empathy and love for the country: This research aims to increase the historical empathy and love for the country of students in the history education study program. The case method model was chosen as a solution to problems observed in the field related to students' low historical empathy and love for the country, which was visible during the lecture process. This research is classified as classroom action research and is divided into three cycles, each of which consists of four stages: planning, implementation, observation, and reflection. The data collection techniques used were observation, interviews, documentation, and literature study. Based on the research results, the case method learning model with a multiculturalism approach in the contemporary Indonesian history course can increase students' historical empathy and love for their country, and both increased in each cycle. The percentage for historical empathy obtained in cycle I is 20.6, in cycle II 27.6, and in cycle III it increased to 35.8. Meanwhile, for love of the country, the percentage obtained in cycle I was 69.6, in cycle II 74.2, and in cycle III 79.8. INTRODUCTION Humans are social creatures who cannot live alone without the help of others. In the 21st century, technological advances are developing rapidly, causing empathy in each individual to wear thin. The decline in empathy has also occurred among students, including at Jambi University, especially in the history education study program. This can be seen when students explain historical material through problem analysis: they do not provide deep meaning and tend to be individualistic, so that the noble values of humanity and society, such as helping, kinship, cooperation, togetherness, and concern for others, fade from life (Utami, 2019). Based on observations during lectures, students of the Jambi University History Education Study Program only focused on mastering the material but ignored the historical empathy skills that must also be mastered by students so that they have a sense of love for the country. Empathy, in this sense, is the skill of reliving historical thoughts in one's mind, or the ability to see the world as people in the past saw it. Historical empathy is the ability to revive historical thoughts in one's mind; in other words, the ability to see the world as people in the past saw it without imposing current values onto the past. This shows that educational researchers and historians have seen, discussed, and studied the meaning of historical empathy at both theoretical and practical levels (Yilmaz, 2007).
Historical empathy refers to the ability to emotionally understand and experience, as well as better contextualize, the life experiences of historical figures. What this means is that when we read a text about the suffering of a historical figure, we not only remember the facts intellectually, but we can also understand them more deeply: a deep understanding of how people felt, thought, and acted, the reasons behind those actions, and the consequences they faced in their historical context (Afriani et al., 2022; Elbay, 2022). Therefore, it can be concluded that historical empathy is the capacity to place ourselves in the position of a historical figure in the past. Historical empathy is important to instill in students as prospective history teachers so that they can better appreciate the fact that the past consists of a vast collection of people's experiences and is full of wisdom. As is known, history is an interpretation of the past, and through historical empathy we can better gauge the true nature of the past. A solution must be found for the decline of empathy among students; one way is to instill historical empathy in them through history lectures, for example in the contemporary Indonesian history course (Christensen et al., 1991). The contemporary Indonesian history course is one of the courses in the History Education Study Program, and it has two goals. The first is that students acquire knowledge and sharp analysis related to historical events in the contemporary period. Second, students are of course required to have empathy skills (historical empathy) in order to have historical sensitivity to the historical events studied and to be able to convey their meaning to their own students once they have entered the workforce, especially when they become teachers. Historical empathy needs to be cultivated in prospective history teachers because it allows us to better appreciate the fact that the past consists of a vast collection of people's experiences (Elbay, 2022). The low sense of empathy among students has an impact on the decline of love for the country: 21st-century students tend to like foreign cultures such as K-POP and accessories from Korea and Western countries. They tend to know less about regional songs and traditional regional cuisine, and students are now trapped in an apathetic and hedonistic culture, shown by the increasingly waning love for the country among the nation's next generation (results of interviews with students). Love for the country can be defined as a feeling of pride, a sense of belonging, a sense of appreciation, a sense of respect, and loyalty possessed by each individual, as seen in the attitude of being willing to sacrifice, to protect others, and to love one's culture (Wisnarni, 2017).
Love for the country includes three aspects, namely love for the place and environment, love for the authority or government as the body that has the authority to regulate life together, and love for ideas or ideals (Tridiatno & Suryanti, 2021). Love for the country is a feeling of love for one's own nation and country, with an attitude of being willing to make sacrifices for the sake of one's nation and country and of mutual respect in daily life, whether in the family, school, or community environment (Nur'insyani & Dewi, 2021). Love for one's homeland is an awareness of the actual and potential membership of the entire nation working together to achieve defence, devoting integrity, identity, strength, and prosperity to the nation with a national spirit (Hanifa et al., 2022). The benefits of love for the country for the Indonesian people, especially students, are as follows. First, it reminds us of the struggle of the heroes for the Indonesian nation. Second, it provides security and peace wherever one is, because one has an attitude of respect. Third, the country becomes stronger and makes progress. Fifth, it can foster an attitude of nationalism and a willingness to make self-sacrifice (Amalia et al., 2020). A solution to this condition needs to be found, one of which is the case method model with a multicultural approach. The multicultural approach is the right tool to form students into learners who have historical sensitivity, which has an impact on their sense of historical empathy (Supriatin & Nasution, 2017). The case method model is suitable for increasing students' sense of historical empathy because this model invites analysis of cases and problems given by lecturers, or which students can find themselves. Through this model, the lecturer becomes a facilitator of the students in the case-solving discussion process and monitors students as they deliver the results of their discussions with the group (Webb et al., 2005). The implementation of this case method model is assisted by a multicultural approach. The multicultural approach brings together groups from various cultures, with the aim of introducing the differences of each culture, so that it is suitable for use in contemporary history courses (Susanto & Purwanta, 2022). The multicultural approach is the right tool to form students into learners who have historical sensitivity, with an impact on their sense of historical empathy and love for the country. METHOD This research on case method learning with a multicultural approach to increase historical empathy and love for the students' country involved 28 students taking the contemporary Indonesian history course in the History Education Study Program, FKIP, Jambi University, located in Mendalo Darat, Muaro Jambi Regency. The research uses classroom action research with three cycles. The research design consists of four steps: planning, action, observation, and reflection (Baransano et al., 2017; Susilo et al., 2022).
The first data collection technique uses observation sheet instruments and questionnaires. The questionnaires consisted of historical empathy and love-for-the-country questionnaires. Second, interviews in this study were used to measure the success of the research (Indiana University, 2005). The interviews were conducted with Class of 2020 history education students, observers, and the lecturers in charge of the contemporary Indonesian history course. Third, documentation complements the observations and interviews in this study. Documentation serves as a recording tool to describe what happened in the lecture room during the contemporary Indonesian history lectures in the context of classroom action research. Fourth, field notes in this study are used to describe and reflect. Data were analyzed qualitatively and quantitatively. The qualitative technique is used to get an overview of the lecture process, both the implementation of the action plan and the obstacles experienced (Sanjaya, 2015). The quantitative technique is used to see the increase in historical empathy and love for the students' country under the case method model integrating the multiculturalism approach. Data processing results: historical empathy. Students' historical empathy was observed during the study and assessed based on an assessment rubric. The assessment aspects include 1) tolerance, 2) equality, 3) democracy, 4) harmony, and 5) mutual cooperation. The following describes the scores obtained for historical empathy. Referring to Table 1, it can be concluded that for students' historical empathy in cycle I, the group that obtained the best score was group 1 with a score of 25, while in cycle II the group that obtained the best score was group 5 with a score of 34. In cycle III, the group that obtained the best score was group 6, with a score of 41. Data processing results: love for the country. Data on students' love for their country under the case method learning model with a multicultural approach were obtained from a questionnaire distributed to students. The results are shown in Table 2. Referring to Table 2, it is revealed that overall students' sense of love for the country has increased. In cycle I the average was 69.6; in cycle II it increased by about 6% to 74.2; and in cycle III it increased by about 8% to 79.8. These findings indicate that students' feelings of love for their homeland increased in each cycle. This is because students had become familiar with the case method model. The multiculturalism approach upholds diversity in the classroom with indicators of tolerance, equality, democracy, harmony, and mutual cooperation. The history education class is a diverse class, with students from different religious and cultural backgrounds, so students can respect each other when there are differences of opinion and can discuss openly.
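For transparency, the cycle-to-cycle gains quoted above can be checked as relative percentage increases with a few lines of Python (the values are taken from the text; the reported "6%" and "8%" appear to be these relative increases, rounded):

```python
# Average "love for the country" scores per cycle, as reported in the text.
scores = {"cycle I": 69.6, "cycle II": 74.2, "cycle III": 79.8}

vals = list(scores.values())
for prev, curr, label in zip(vals, vals[1:], list(scores)[1:]):
    rel = 100 * (curr - prev) / prev
    print(f"{label}: +{curr - prev:.1f} points ({rel:.1f}% relative increase)")
# cycle II:  +4.6 points (6.6% relative increase)  -> reported as ~6%
# cycle III: +5.6 points (7.5% relative increase)  -> reported as ~8%
```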
By integrating a multicultural approach into the learning of contemporary Indonesian history through the case method model, students' historical empathy can be increased. This is because, in the learning process, students are asked to examine contemporary Indonesian cases from various angles through a multicultural approach, so that a sense of empathy grows toward the historical events that have occurred, and students can take lessons from each of those events (Ys et al., 2020). A multicultural approach is a process of developing the potential of all humans to appreciate heterogeneity as a result of cultural, racial, ethnic, and religious diversity (Ibrahim, 2013).

The case method learning model with a multicultural approach is suitable for increasing students' historical empathy and love for the country (Liza, 2020). With the case method, students of the history education study program show high curiosity about the news items and case studies given, which encourages them to provide in-depth analysis and relate existing cases to lecture material (Sinambela, 2017). The multicultural approach used in this model upholds diversity in the classroom, with indicators of tolerance, equality, democracy, harmony, and mutual cooperation (Nuriana et al., 2020). The history education class is diverse, with students from different religious and cultural backgrounds, so students can respect each other when there are differences of opinion and can discuss openly.

Students who are exposed to case studies develop their emotions and attitudes, and the dominant emotion is empathy (Heiney et al., 2019). Empathy can be developed through the case study process (Mennenga et al., 2016). Students need knowledge of the past (context) and an understanding of why one thing caused another to happen (Bartelds et al., 2020); to find out, they need to do a case study to establish cause and effect. Meanwhile, to anticipate and minimize discrimination and social conflict arising from Indonesian multiculturalism, it is important to implement multicultural education from an early age so that a sense of nationalism, or love for one's country, continues to develop (Sudargini & Purwanto, 2020). Thus, the combination of the case method and the multicultural approach in this research helped develop students' historical empathy, and ultimately their love for the country also increased.

Students' historical empathy in this research was built through their case studies of Indonesian history. Some of the cases they were asked to study related to separatist independence movements in several regions and to stories of Indonesian citizens who could not return to their homeland because of a ban. Students studied these cases to deepen their understanding and develop the emotions associated with empathy; this activity also ultimately increased their love for the country. Through education, including higher education, it is necessary to develop students' love of the homeland (Aini & Efendi, 2019; Suparjan, 2019), for example by understanding and studying history (Anggoro et al., 2020), because such understanding is important (Rumbekwan et al., 2018). Understanding and imagining historical content plays a major role in achieving high historical empathy (Ladjaharun et al., 2022), and a greater understanding of the history of national movements contributes to a student's love of the homeland (Akbar et al., 2017).
The implementation of the case method learning model with a multicultural approach to increase historical empathy and love for the country has a positive impact on history education students, especially in the Contemporary Indonesian History course. Its implementation, of course, cannot be separated from several obstacles, among them: 1) students are not accustomed to analyzing a case study in connection with lecture material; 2) uneven division of tasks, so that work is concentrated on only one or two people; and 3) time management that is still not effective.

As for the solutions, there must be synergy between the lecturers teaching the Contemporary Indonesian History course and the students. Rules need to be agreed upon so that the lecture process becomes more conducive, such as a policy on the use of gadgets in the classroom. With the case method model, it is hoped that in the future students will become accustomed to analyzing cases and relating them to lecture material. The multicultural approach is also one of the solutions amid the diversity of students in the classroom, fostering harmony and increasing historical empathy and love for the homeland.

CONCLUSION

Based on the results and discussion, it can be concluded that classroom action research implementing the case method model with a multicultural approach increased the historical empathy and love for the homeland of history education study program students. The planning steps were: designing and compiling case-method-based lesson plans; preparing materials and learning media; choosing the lecture method, namely discussion; and preparing the research instruments. The implementation stage consisted of three cycles, each with four stages: planning, action implementation, observation and analysis, and reflection. In each cycle, historical empathy during the discussion process was observed through the lens of the multicultural approach. Both measures increased in each cycle: historical empathy scored 20.6 in cycle I, 27.6 in cycle II, and 35.8 in cycle III, while love of country scored 69.6 in cycle I, 74.2 in cycle II, and 79.8 in cycle III. The remaining obstacle is that students still have difficulty analyzing a case in relation to lecture material.

Table 2. Students' love for the country
On rack cohomology

We prove that the lower bounds for the Betti numbers of the rack, quandle and degeneracy cohomology given by Carter, Jelsovsky, Kamada, and Saito are in fact equalities. We compute as well the Betti numbers of the twisted cohomology introduced by Carter, Elhamdadi, and Saito. We also give a group-theoretical interpretation of the second cohomology group for racks.

Introduction

A rack is a pair (X, ⊲) where X is a set and ⊲ : X × X → X is a binary operation such that:

(1) the map φ_x : X → X, φ_x(y) = x ⊲ y, is a bijection for all x ∈ X, and
(2) x ⊲ (y ⊲ z) = (x ⊲ y) ⊲ (x ⊲ z) for all x, y, z ∈ X.

It is easy to show that (X, ⊲) is a rack if and only if the map R : X^2 → X^2 given by R(x, y) = (x, x ⊲ y) is an invertible solution of the quantum Yang-Baxter equation R^{12} R^{13} R^{23} = R^{23} R^{13} R^{12}. Racks have been studied by knot theorists in order to construct invariants of knots and links and of their higher-dimensional analogs (see [CS] and references therein). A basic example of a rack is a group with the operation x ⊲ y = xyx^{-1} (or, more generally, a conjugation-invariant subset of a group).

Several years ago, Fenn, Rourke and Sanderson [FRS] proposed a cohomology theory of racks: for each rack X and each abelian group A, they defined cohomology groups H^n(X, A). This cohomology is useful for knot theory and also, as was recently found, for the theory of pointed Hopf algebras [G]. There have been a number of results about this cohomology [LN, M, CJKS]; in particular, it was shown in [CJKS] that for a finite rack X and a field k of characteristic zero, the Betti numbers dim H^n(X, k) are bounded below by |X/∼|^n, where ∼ is the equivalence relation on X generated by z ⊲ y ∼ y for all y, z ∈ X. The equality was anticipated in [CJKS], and proved in a number of cases [LN, M], but not in general.

The main result of this paper implies that the Betti numbers of a finite rack are always equal to |X/∼|^n. The proof is based on a group-theoretical approach to racks, originating from the works [LYZ], [S] on set-theoretical solutions of the quantum Yang-Baxter equation. Namely, we use the structure group G_X and the reduced structure group G_X^0 of a rack X considered in [LYZ, S]. We also give a group-theoretic interpretation of the second cohomology group H^2(X, A), which is used in the theory of Hopf algebras: we show that this group is isomorphic to the group cohomology H^1(G_X, Fun(X, A)), where Fun(X, A) is the group of functions from X to A. This is a relatively explicit description, since it is shown by Soloviev [S] that for a finite rack X, the group G_X is a central extension of the finite group G_X^0 by a finitely generated abelian group. Thus the cohomology of G_X can be studied using the Hochschild-Serre sequence.

The group G_X (generated by the elements of X subject to the relations xy = (x ⊲ y)x) acts on X from the left by ⊲. Consider the quotient G_X^0 of G_X by the kernel of this action, i.e. the group of transformations of X generated by the maps x ⊲ −. This group is called the reduced structure group of X.

Remark 2.2. The groups G_X, G_X^0 were studied by Soloviev [S] (we note that in his work, racks are called "derived solutions"). In particular, he showed that the category of racks is equivalent to the category of quadruples (G, X, ρ, π), where G is a group, X a set, ρ : G × X → X a left action, and π : X → G an equivariant mapping (where G acts on itself by conjugation), such that π(X) generates G and the G-action on X is faithful. Namely, the quadruple corresponding to X is simply (G_X^0, X, ρ, π), where ρ and π are the obvious maps.
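As a quick illustration of the definition (not taken from the paper), the following minimal sketch checks both rack axioms for the conjugation operation x ⊲ y = xyx^{-1} on the symmetric group S3, with permutations of (0, 1, 2) written in one-line notation; all names in the code are illustrative choices.

```python
from itertools import product, permutations

# Minimal sketch: the conjugation operation x > y = x*y*x^{-1} on a group
# gives a rack.  We check both rack axioms on S3; this example is
# illustrative, not from the paper.

def compose(p, q):                       # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(x, y):                          # x > y = x y x^{-1}
    return compose(compose(x, y), inverse(x))

S3 = list(permutations(range(3)))

# Axiom 1: y -> x > y is a bijection for every x.
assert all(len({conj(x, y) for y in S3}) == len(S3) for x in S3)

# Axiom 2 (self-distributivity): x > (y > z) = (x > y) > (x > z).
assert all(conj(x, conj(y, z)) == conj(conj(x, y), conj(x, z))
           for x, y, z in product(S3, repeat=3))
print("conjugation on S3 satisfies both rack axioms")
```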
Now let us define rack cohomology. Let X be a rack and let G_X be its structure group. (This group appears already in the work of Joyce [J], who pointed out that the functor X → G_X is adjoint to the functor assigning to a group its underlying rack, with the conjugation operation; thus G_X can be viewed as the "enveloping group" of X.) Let M be a right G_X-module. We define a cochain complex (C^•(X, M), d) with C^n(X, M) = Fun(X^n, M). (Here X^0 is a set of one element, and Fun(Y, Z) is the set of functions from Y to Z for any sets Y, Z.) This includes the ordinary rack cohomology with coefficients in an abelian group A introduced in [FRS] (this corresponds to taking M = A with the trivial action of G_X), as well as the twisted rack cohomology introduced in [CES] (in this case one takes a Z[T, T^{-1}]-module M and defines a right action of G_X on it by vx = Tv, x ∈ X).

Remark 2.4. One can also define the dual notion of rack homology. As usual, it is completely analogous to cohomology, so we will not consider it.

Remark 2.5. In [AG] there is a more general definition of cohomology, with coefficients in objects of a wider category than that of G_X-modules. When restricted to G_X-modules, the definition there uses a differently normalized differential d′; the resulting complex is isomorphic to the one we consider here.

3. The structure of rack cohomology

Let M be a right G_X-module. Then C^n(X, M) = Fun(X^n, M) is also a right G_X-module, with the action defined on the generators by

(f · y)(x_1, …, x_n) = f(y ⊲ x_1, …, y ⊲ x_n) · y.   (1)

The coboundary operator d commutes with this action. In particular, there is a natural right action of G_X on the groups of cocycles Z^n(X, M), coboundaries B^n(X, M), and cohomology H^n(X, M).

Lemma 3.1. (1) The coboundary operator d is a morphism of G_X-modules. (2) G_X acts trivially on the cohomology H^n(X, M).

Proof. (2) Let f ∈ Z^n(X, M) and consider f_y ∈ C^{n-1}(X, M), defined by the formula f_y(x_1, …, x_{n-1}) = f(y, x_1, …, x_{n-1}); a direct computation shows that f · y − f is the coboundary of ±f_y, so the action on cohomology is trivial.

Remark 3.3. The action f · y and the assignment f → f_y, as well as (3.2), appear in [LN].

By Lemma 3.1 we can consider the subcomplex C^•_inv(X, M) of G_X-invariant cochains. For a trivial G_X-module A there is a product of cochains, compatible with the differentials; this map will be denoted by f, g → f ⊗ g. The verification is straightforward, but we note that the statement becomes false if A is nontrivial as a G_X-module or g is not invariant. Furthermore, by the same Lemma, the cohomology class of f ⊗ g depends only on the cohomology classes of f and g. Thus, we have a product on cohomology.

4. Cohomology of finite racks

In this section we will assume that X is a finite rack. Let M be a right G_X-module such that the kernel K of the action of G_X on M has finite index. Let L be the intersection of K with the kernel Γ of the action of G_X on X, and let G = G_X/L (notice that G is finite). Assume that multiplication by |G| is an isomorphism M → M.

Lemma 4.1. Under these conditions, the inclusion C^•_inv(X, M) ⊂ C^•(X, M) induces an isomorphism ξ on cohomology.

Proof. On each term of this complex we have the projector P = (1/|G|) Σ_{g ∈ G} g, which projects onto the G_X-invariants. This projector commutes with the differential, so the complex C^•(X, M) is representable as a direct sum of complexes: C^• = C^•P ⊕ C^•(1 − P). By Lemma 3.1, the second summand is acyclic: indeed, any cohomology class c in it satisfies cP = 0, while the lemma says that cP = c, hence c = 0. This implies the desired statement.

In particular, for any ring R with trivial G_X-action such that N = |G_X^0| is invertible in R (for example, R = Z[1/N]), H^•(X, R) is an algebra, and if M is an R-module with a compatible G_X-action then H^•(X, M) is a left module over this algebra. Let Orb(X) = X/G_X be the set of G_X-orbits on X, and m = |Orb(X)|.
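The set Orb(X) is easy to compute in examples. The following minimal sketch (an illustration, not taken from the paper) computes Orb(X) for the dihedral rack Z/6 with i ⊲ j = 2i − j (mod 6), using the fact that x and y ⊲ x always lie in one G_X-orbit, and lists the numbers m^n that appear in the Betti number computation below.

```python
# Minimal sketch: compute Orb(X) = X/G_X for the dihedral rack Z/6
# (i > j = 2*i - j mod 6); this example is illustrative, not from the paper.
n = 6
tri = lambda i, j: (2 * i - j) % n

parent = list(range(n))                  # union-find over X
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for y in range(n):                       # x and y > x lie in one orbit
    for x in range(n):
        a, b = find(x), find(tri(y, x))
        if a != b:
            parent[a] = b

m = len({find(x) for x in range(n)})
print("m = |Orb(X)| =", m)                              # 2 (evens and odds)
print("m^n for n = 0..4:", [m**k for k in range(5)])
```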
The main result in this section is the following.

Theorem 4.2. The multiplication map µ : T^•(H^1(X, R)) ⊗ M^{G_X} → H^•(X, M) is an isomorphism; in particular, H^n(X, M) ≅ Fun(Orb(X)^n, M^{G_X}).

Before proving the theorem, we will derive a corollary.

Corollary 4.3. For a finite rack X one has dim H^n(X, Q) = m^n; the analogous statement holds over R = Z[1/N] and over R = Z/p with p ∤ N.

Proof. The first assertion is clear taking R = Q. For the second one, take R = Z[1/N] (or R = Z/p, p ∤ N) and apply the universal coefficient theorem.

Remark 4.4. This, together with the lower bounds for the Betti numbers of the quandle and degeneracy cohomology in [CJKS] and the splitting result of [LN], implies that those lower bounds are in fact equalities.

Proof (of Theorem 4.2). Since M^{G_X} = H^0(X, M) for any M, we have an obvious multiplication mapping µ : T^•(H^1(X, R)) ⊗ M^{G_X} → H^•(X, M), which is compatible with the algebra and module structures. Thus, all we have to show is that µ is an isomorphism.

Let us first show that µ is injective. This is in fact the lower bound of [CJKS], but we will give a different proof. The proof is by induction in the degree. The base of the induction is clear. Assume the statement is known in degrees < n, and let c ∈ Fun(Orb(X)^n, M^{G_X}) be such that µ(c) = 0. This means that the pullback f : X^n → M of the function c is a coboundary: f = dg. Because f is invariant (under the diagonal action of G_X), we can assume that g is invariant. This means that for any y ∈ X, we have (dg)_y = d(g_y) (we recall that g_y(x_1, …, x_l) := g(y, x_1, …, x_l)). Thus, f_y = dg_y. But f_y is the pullback of a function c_y ∈ Fun(Orb(X)^{n-1}, M^{G_X}), so by the induction assumption c_y = 0. Hence c = 0.

Now let us prove that µ is surjective. For this it suffices to show that H^n(X, M) ⊂ H^1(X, R)·H^{n-1}(X, M). Let c ∈ H^n(X, M). By Lemma 4.1, the element c can be represented by an invariant cocycle f ∈ Z^n_inv(X, M). By Remark 3.4, f_y ∈ Z^{n-1}(X, M) for all y ∈ X. For each y ∈ X, decompose f_y as f_y = f_y·P + f_y·(1 − P) =: f⁺_y + f⁻_y; this defines cochains f⁺, f⁻ with f = f⁺ + f⁻ and f⁺ ∈ Z^n(X, M), and thus also f⁻ ∈ Z^n(X, M). Let us see now that f⁺ and f⁻ are invariant: for any h ∈ C^n(X, M) and g ∈ G_X, we have the equality h_y · g = (h · g)_{g^{-1}y}, which implies that f⁺_{gy} = f_{gy} · P = f_{gy} · g^{-1}P = (f · g^{-1})_y · P = f⁺_y, and thus (f⁺ · g)_y = (f⁺_{gy}) · g = (f⁺)_y. Since this equality holds for all y ∈ X, we have f⁺ ∈ Z^n_inv(X, M) as claimed. Since f ∈ Z^n_inv(X, M), we also have f⁻ ∈ Z^n_inv(X, M).

Now, as G_X acts trivially on cohomology, there exists h ∈ C^{n-1}(X, M) such that d(h_y) = f⁻_y for each y ∈ X. Take h̃ = hP. We have d((h · g)_y) = d(h_{gy} · g) = d(h_{gy}) · g = f⁻_{gy} · g = (f⁻ · g)_y = f⁻_y, and thus, by (3.2), (dh̃)_y = d(h̃_y) = f⁻_y, whence dh̃ = f⁻. Thus, f⁻ is a coboundary, and we can assume that f = f⁺. In other words, f ∈ Fun(Orb(X), Z^{n-1}(X, M)^{G_X}). This means that f = Σ_{s ∈ Orb(X)} 1_s ⊗ f(s), where 1_s is the characteristic function of s with values in R. Since 1_s is a cocycle, we have proved that c ∈ H^1(X, R)·H^{n-1}(X, M), as desired.

Now let M be a semisimple finite-dimensional G_X-module over a field k of characteristic zero (but we do not require the image of G_X to be finite). In this case, we have:

Theorem 4.5. The conclusion of Theorem 4.2 holds for such M.

Proof. By Chevalley's theorem [C], the representations C^n(X, M) = Fun(X, k)^{⊗n} ⊗ M are semisimple (as tensor products of semisimple representations). Therefore, there exists an invariant projector P : C^• → (C^•)^{G_X}. The rest of the proof is the same as in the previous case.

Recall [S] that G_X is a central extension of the finite group G_X^0, with kernel the finitely generated abelian group Γ. The first complex is acyclic by Theorem 4.5, the third one is acyclic by the induction assumption, so by the long exact sequence in cohomology, the complex in the middle is also acyclic. The induction step and the corollary are proved.
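The averaging projector P from the proof of Lemma 4.1 is also easy to see in a small example. The sketch below (an illustration with names and the test cochain chosen by us, not taken from the paper) builds the reduced structure group of the dihedral rack Z/5 with trivial rational coefficients, averages an arbitrary 2-cochain, and verifies that the result is invariant under the action (1).

```python
from itertools import product

# Minimal sketch of the projector P = (1/|G|) sum_g g from Lemma 4.1,
# for the dihedral rack X = Z/5 with trivial rational coefficients.
n = 5
tri = lambda i, j: (2 * i - j) % n

# The reduced structure group: closure of the maps phi_y under composition.
gens = [tuple(tri(y, x) for x in range(n)) for y in range(n)]
G = set(gens)
while True:
    new = {tuple(g[h[x]] for x in range(n)) for g in G for h in G} - G
    if not new:
        break
    G |= new

f = {(a, b): (3 * a + b * b) % 7 for a, b in product(range(n), repeat=2)}
Pf = {k: sum(f[(g[k[0]], g[k[1]])] for g in G) / len(G) for k in f}

# Invariance under (1): Pf(y > x1, y > x2) equals Pf(x1, x2).
assert all(Pf[(tri(y, a), tri(y, b))] == Pf[(a, b)]
           for y in range(n) for a in range(n) for b in range(n))
print(f"|G0| = {len(G)}; the averaged 2-cochain is invariant")
```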
To compute the Betti numbers of the twisted cohomology, the only lacking case is that in which the elements of the rack X act on M by a Jordan block with 1 on the diagonal.

Proposition 4.8. Suppose that all elements of X act on a k-dimensional QG_X-module M by a single Jordan block with eigenvalue 1. Then dim H^n(X, M) = m^n.

Before proving the Proposition we state two easy lemmas.

Lemma 4.9. Let (C^•, d) be a complex and suppose that C^• = C^•_1 ⊕ C^•_2 and that, with respect to this decomposition, the differential d has the upper-triangular form

d = ( d_1  α ; 0  d_2 ).

Then α induces a map α^n_* : H^{n-1}(C^•_2) → H^n(C^•_1). Consider the short exact sequence of complexes 0 → C^•_1 → C^• → C^•_2 → 0 and let β^n : H^{n-1}(C^•_2) → H^n(C^•_1) be the connecting homomorphism. Then β^n = α^n_*.

Proof. Since d² = 0, we have d_1 α = −α d_2, whence α induces a map in cohomology. The second assertion follows in a straightforward way from the definition of the connecting homomorphism.

Lemma 4.10. Suppose that C^• = C^•_1 ⊕ C^•_2 is a complex as in Lemma 4.9 and that f : C′^•_2 → C^•_2 is a quasi-isomorphism. Then C^• is quasi-isomorphic to C^•_1 ⊕ C′^•_2, where the latter complex has differential given by ( d_1  αf ; 0  d′_2 ).

Proof. This follows easily from the 5-lemma.

Proof of Proposition 4.8. The proof is by induction on k. If k = 1 the assertion is Corollary 4.3. Assume that the result is true for dimensions < k. Let us decompose C^• = C^•(X, M_1) ⊕ C^•(X, M_2), where M_1 is generated by v_1, …, v_{k-1} and M_2 is generated by v_k; the differential d in C^• can then be written as in Lemma 4.9. Let us take C′^•_2 = T^•(Fun(Orb(X), Q)). By Theorem 4.2, the inclusion i : C′^•_2 → C^•_2 is a quasi-isomorphism, and thus by Lemma 4.10 we can work with C^•(X, M_1) ⊕ T^•(Fun(Orb(X), Q)). We consider the associated long exact sequence (4.11), write ᾱ for the composition of α with the inclusion of C′^•_2, and consider the induced map in cohomology ᾱ_*, i.e., ᾱ^n_* : H^{n-1}(C′^•_2) = T^{n-1}(Fun(Orb(X), Q)) → H^n(C^•_1) = H^n(X, M_1). By Lemma 4.9, β^n = ᾱ^n_*.

We claim that rk ᾱ_* = rk ᾱ. To see this, it suffices to prove that Im ᾱ^n ∩ B^n(C^•_1) = 0. On the other hand, if π : X → Orb(X) is the canonical projection, one shows that the relevant element b_{k-1} lies in T^n(Fun(Orb(X), Q)); but it is shown in the injectivity part of the proof of Theorem 4.2 that T^n(Fun(Orb(X), Q)) ∩ B^n(X, Q) = 0, and the claim is proved.

Then rk β^n = rk ᾱ^n. The latter is not difficult to compute: consider the complex (D^•, d̂), where D^n = Fun(Orb(X)^n, Q) and d̂ is given by

d̂(f)(a_1, …, a_n) = Σ_{i=1}^n (−1)^i f(a_1, …, a_{i-1}, a_{i+1}, …, a_n);

then it is clear that ᾱ^n and d̂^n have the same rank. Furthermore, it is well known that D^• is acyclic (it computes the reduced cohomology of a simplex of dimension m − 1). It is then easy to compute the rank of d̂: we have rk d̂^n = m^{n-1} − m^{n-2} + m^{n-3} − ⋯ ± 1.

Feeding this computation into the long exact sequence (4.11), we are done: we have rk β^n = m^{n-1} − m^{n-2} + ⋯ ± 1, and since by the inductive assumption dim H^n(C^•_1) = m^n, then rk i^n = m^n − m^{n-1} + ⋯ ± 1. Also, we have rk β^{n+1} = m^n − m^{n-1} + ⋯ ± 1, and since dim H^n(C′^•_2) = m^n, we get rk p^n = m^{n-1} − m^{n-2} + ⋯ ± 1. Thus, dim H^n(C^•) = rk i^n + rk p^n = m^n, proving the inductive step.

Since for M as above we have dim M^{G_X} = 1, we have proved:

Corollary 4.12. Let M be a right QG_X-module on which all the elements of X act by the same operator. Then dim H^n(X, M) = m^n · dim M^{G_X}.

Remark 4.13. It is interesting to study the graded algebra H^•_inv(X, k), where k is a field of characteristic p dividing |G_X^0|, to which Theorem 4.2 does not apply. One may ask the following questions about this ring:

• Is it finitely generated?
• What is its Poincaré series? Is it a rational function?
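The rank formula for d̂ used at the end of the proof can be checked numerically. The following minimal sketch (an illustration under our own choice of small parameters, not part of the paper) builds the matrix of d̂^n : D^{n-1} → D^n and compares its rank with the alternating sum m^{n-1} − m^{n-2} + ⋯ ± 1.

```python
import numpy as np
from itertools import product

# Minimal sketch: check rk d^n = m^{n-1} - m^{n-2} + ... +- 1 for the
# differential d(f)(a_1,...,a_n) = sum_i (-1)^i f(... omit a_i ...)
# on D^n = Fun(Orb(X)^n, Q), for small m and n.
def dhat_matrix(m, n):
    rows = list(product(range(m), repeat=n))        # basis of D^n
    cols = list(product(range(m), repeat=n - 1))    # basis of D^{n-1}
    col_index = {c: k for k, c in enumerate(cols)}
    M = np.zeros((len(rows), len(cols)))
    for r, a in enumerate(rows):
        for i in range(n):                          # omit the i-th argument
            M[r, col_index[a[:i] + a[i + 1:]]] += (-1) ** (i + 1)
    return M

for m in (2, 3):
    for n in (2, 3, 4):
        rank = np.linalg.matrix_rank(dhat_matrix(m, n))
        predicted = sum((-1) ** k * m ** (n - 1 - k) for k in range(n))
        assert rank == predicted, (m, n, rank, predicted)
        print(f"m={m}, n={n}: rank {rank} = {predicted}")
```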
5. A relation with group cohomology

In this section, for any rack X, we give a group-theoretical interpretation of the group H^2(X, A) (where A is a trivial G_X-module). This group is useful in the theory of pointed Hopf algebras [G]. We start with the following obvious, but useful, proposition.

Proposition 5.1. Let A be a trivial G_X-module. Then one has a natural isomorphism of complexes J : C^n(X, A) → C^{n-1}(X, Fun(X, A)), n ≥ 1, where we consider the action of G_X on Fun(X, A) given by (hy)(x) = h(y ⊲ x). It is given by (Jf)(x_1, …, x_{n-1})(x_n) = f(x_1, …, x_n). In particular, it induces an isomorphism H^n(X, A) → H^{n-1}(X, Fun(X, A)).

Remark 5.2. We note that this proposition becomes false if the action of G_X on A is not trivial.

Now we give the main result of this section. Let M be a right G_X-module.

Proposition 5.3. The natural map η : H^1(G_X, M) → H^1(X, M) is an isomorphism.

Propositions 5.1 and 5.3 imply:

Corollary 5.4. H^2(X, A) ≅ H^1(G_X, Fun(X, A)).

Proof (of Proposition 5.3). Let C^•(G, M) be the standard complex of a group G with coefficients in a right G-module M. Let η : C^1(G_X, M) → C^1(X, M) be the homomorphism induced by the natural map X → G_X. It is easy to show that this homomorphism maps cocycles to cocycles and coboundaries to coboundaries; thus, it induces a homomorphism η : H^1(G_X, M) → H^1(X, M). Our job is therefore to show that any f ∈ Z^1(X, M) lifts uniquely to a 1-cocycle on G_X. To do this, recall that a map π : G_X → M is a 1-cocycle iff the map π̃ : G_X → G_X ⋉ M given by g → (g, π(g)) is a homomorphism. On the other hand, we have a map ξ_f : X → G_X ⋉ M given by ξ_f(x) = (x, f(x)). So we need to show that ξ_f extends to a homomorphism G_X → G_X ⋉ M. But the group G_X is generated by X with the relations xy = (x ⊲ y)x; thus, we only need to check that the elements ξ_f(x) satisfy the same relations. It is easy to check that this is exactly the condition df = 0. We are done.

Given a right G_X-module N, one can endow the set X × N with a binary operation; it is easy to verify that this is a rack structure on the product, which we denote by (X ⋉ N, ⊲) (it is actually the same structure as in [AG] for the left X-module N with x · n = nx^{-1}). We have then, with a straightforward proof:

Lemma 5.5. Let ω : X → N and define ω̃ : X → X ⋉ N by ω̃(x) = (x, ω(x)x^{-1}). Then ω̃ is a rack homomorphism if and only if ω ∈ Z^1(X, N).

Take α : X ⋉ N → G_X ⋉ N, α(x, n) = (x, nx). One can check that, in the resulting commutative square, each of ω and π determines the other uniquely.

Remark 5.6. Corollary 5.4 holds also when A is nonabelian. In this case H^2(X, A) is the quotient of the set Z^2(X, A) = { f : X × X → A | f(x ⊲ y, x ⊲ z) f(x, z) = f(x, y ⊲ z) f(y, z) } by the equivalence relation f ∼ f′ iff there is a γ : X → A such that f′(x, y) = γ(x ⊲ y) f(x, y) γ(y)^{-1}. The proof is the same as in the abelian case.
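As a sanity check of the description in Remark 5.6 (written additively here for the abelian group A = Z/4), the sketch below verifies that every function of the coboundary form f(x, y) = γ(x ⊲ y) − γ(y) satisfies the rack 2-cocycle condition; the rack, the group A, and the function γ are illustrative choices of ours, not data from the paper.

```python
import random

# Minimal sketch: coboundaries f(x, y) = gamma(x > y) - gamma(y) satisfy
#   f(x>y, x>z) + f(x, z) = f(x, y>z) + f(y, z)   (additively, A = Z/4).
n = 5
tri = lambda i, j: (2 * i - j) % n       # dihedral rack on Z/5

random.seed(0)
gamma = [random.randrange(4) for _ in range(n)]
f = lambda x, y: (gamma[tri(x, y)] - gamma[y]) % 4

for x in range(n):
    for y in range(n):
        for z in range(n):
            lhs = (f(tri(x, y), tri(x, z)) + f(x, z)) % 4
            rhs = (f(x, tri(y, z)) + f(y, z)) % 4
            assert lhs == rhs
print("coboundaries satisfy the rack 2-cocycle condition")
```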
An In Vitro Evaluation of Anti-inflammatory and Antioxidant Activities of a Cocos nucifera and Triticum aestivum Formulation

Background: Medicinal plants are traditionally used in Ayurveda, Unani medicine, and Siddha as primary sources of drugs, and mankind has exploited the therapeutic properties of these herbs throughout history. Coconut (Cocos nucifera), a common ingredient of Indian subcontinental cuisine, has been proven to possess various medicinal properties; similarly, wheatgrass (Triticum aestivum) is of great medicinal value and is known as a powerhouse of nutrients and vitamins. These have been used individually, but there are limited data on their synergistic use. Thus, the present in vitro study was designed to prepare an oral gel from the extracts of C. nucifera and T. aestivum and to assess its cumulative anti-inflammatory and antioxidant activity.

Materials and methods: C. nucifera extract and T. aestivum extract were prepared separately, and the gel was formulated. The formulated gel was tested for its anti-inflammatory and antioxidant activity.

Results: The results of the present study demonstrated that the anti-inflammatory activity of the gel formulation was greater than that of the standard (diclofenac), with the highest percentage of inhibition, 90.1%, at 50 μl. The antioxidant activity was comparable to the standard (ascorbic acid) at various concentrations, with the greatest activity at 50 μl.

Conclusion: The oral gel formulation of coconut (C. nucifera) and wheatgrass (T. aestivum) showed better anti-inflammatory and comparable antioxidant activity. Thus, this formulation may be employed as an adjunct to commercially available oral gel preparations.

Introduction

The utilization of natural products for therapeutic purposes is an age-old science, and for a very long time the primary sources of medicines were minerals and products of various plants and animals [1]. These herbal formulations are the basis of many ancient systems of medicine, including Ayurveda, Siddha, and Unani. Research on the plants and their products or formulations used in traditional medicine has attracted a lot of interest in recent years. Indigenous cultures globally have turned to herbal remedies as a result of the resurgence of interest in medicinal plants [2]. Natural resources are increasingly sought after as a source for the development of nutraceuticals, drugs, and various cosmetics.
Inflammation is a complex, immune-mediated, innate defense response to physiological disturbances in the body [3]. The process is driven by chemical mediators released in response to noxious stimuli, which cause vasodilation, increased capillary permeability, and increased vascular supply to the site of injury. The metabolism of arachidonic acid plays a significant role in the chain of events that make up the inflammatory process. Regardless of the etiology, the genesis of pain is mostly inflammatory. Nonsteroidal anti-inflammatory drugs (NSAIDs) remain the mainstay for most physicians worldwide for treating inflammatory reactions and relieving pain [4]. NSAIDs, however, carry various side effects, viz., significant gastrointestinal upset, gastritis, ulceration, and hemorrhage. The long-term use of these drugs is thus a major health concern, and the search for safer, more economical, and more efficient drugs is of great interest. Many herbal products have been found to act similarly to NSAIDs by significantly modulating inflammatory pathways [5].

Antioxidants, in turn, can counter the harmful effects of free radicals. For faster and more efficient recovery in diseased conditions, additional antioxidants are needed from food and other sources, including medicinal plants [6]. It is therefore essential to assess protection against oxidative stress and associated illnesses; an alternative and more promising antioxidant regimen of herbal origin would be valuable, and natural products may be exploited to minimize the side effects of commercially available products.

Cocos nucifera (the coconut tree) bears various medicinal properties. Coconut milk, an emulsion made from its endosperm, is a rich source of lipids, carbohydrates, and proteins. It also contains several minor constituents, such as phenolic compounds, and has been shown to possess antioxidant, antimicrobial, and anti-inflammatory activities [7]. Triticum aestivum, also known as wheatgrass, is an easily grown plant considered to be a nutrient-rich food. Studies have shown this plant to have antioxidant and antimicrobial properties owing to nutritional constituents such as chlorophyll, vitamin A, vitamin C, and many bioflavonoids.

Previous studies have assessed the medicinal properties of these two herbs separately, but none has tried to assess their cumulative properties in an oral gel. In our previous study, we formulated an herbal gel combination that proved to have potent antimicrobial activity and low cytotoxicity [8]. Thus, the present study was designed to evaluate the antioxidant and anti-inflammatory properties of an oral gel formulation of C. nucifera and T. aestivum extracts and to compare them with a commercially available oral diclofenac gel. To the best of our knowledge, the herbal combination used in this study is the first of its kind to be used as an oral gel.

Preparation of extract and gel

The present in vitro study was designed and conducted in the institutional setting of Saveetha Dental College and Hospitals, Chennai. The endosperm of coconut and wheatgrass were procured from the local market of Chennai for the preparation of extract and gel according to previously described specifications [8]. Briefly, 100 mL of distilled water was brought to a boil, to which 2 g of T.
aestivum (wheatgrass) powder was added. The mixture was boiled for half an hour and filtered through Whatman filter paper. The prepared extract was re-boiled for 10 minutes and finally filtered for further analysis. For C. nucifera, 50 g of fresh coconut was chopped and ground to a smooth paste, to which 100 mL of distilled water was added. This mixture was continuously agitated for one hour, then filtered and re-boiled for another 20 minutes.

For the preparation of the gel formulation, 0.5 g of hydroxypropyl methylcellulose was mixed with 20 mL of distilled water, to which 0.5 g of carbachol was added, with continuous stirring throughout the procedure. This mixture was added to the C. nucifera extract, to which the T. aestivum extract was then added and mixed, yielding the formulation in gel form (Figure 1).

Anti-inflammatory activity assessment using the albumin denaturation assay

In this assay, 2 mL of 1% bovine albumin fraction was added to 400 μL of the herbal extracts of C. nucifera and T. aestivum at concentrations of 50-150 μL. The pH of the reaction mixture was maintained at 6.8 using 1 N hydrochloric acid. The mixture was incubated at room temperature, heated in a water bath at 55 °C for 20 minutes, and then cooled. The absorbance was recorded at 600 nm. Diclofenac sodium was used as the standard at concentrations of 10 μl, 20 μl, 30 μl, 40 μl, and 50 μl. The percentage inhibition was estimated as % inhibition = ((control OD − sample OD)/control OD) × 100, where control OD designates the absorbance of the negative control and sample OD stands for the absorbance of the test sample [9].

Antioxidant activity of the prepared oral gel

The antioxidant capability of the C. nucifera- and T. aestivum-based oral gel was tested using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging assay. The oral gel preparation was dissolved in methanol at different concentrations (100-500 mg). Then, 3 mL of 0.5 mM DPPH in methanol was added to 0.5 mL of the dissolved oral gel preparation and incubated at room temperature for 30 minutes. Ascorbic acid was used as the standard control. The color of the reaction mixture changed from violet to yellow as the oral gel preparation reduced DPPH by donating a hydrogen atom. An ultraviolet-visible (UV-Vis) spectrophotometer was used to measure the absorbance of the mixture at 517 nm: % scavenging activity = ((absorbance of control − absorbance of test)/absorbance of control) × 100.

Statistical analysis

The obtained values were entered into a Microsoft Excel sheet. The tests were repeated three times, and the mean of the three values was computed using IBM SPSS Statistics for Windows, Version 26.0 (Released 2019; IBM Corp., Armonk, New York, United States). For the comparison of means between the control and experimental groups at different concentrations, an independent t-test was used. A p-value of less than or equal to 0.05 was considered significant.
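For reference, the two percentage formulas and the significance test described above can be sketched as follows. All numbers in the code are invented placeholders, not measurements from this study.

```python
from scipy import stats

# Minimal sketch of the calculations described above; all values are
# illustrative placeholders, not data from the study.

def pct_inhibition(control_od, sample_od):
    """Albumin denaturation assay (600 nm): ((control - sample)/control) x 100."""
    return (control_od - sample_od) / control_od * 100

def pct_scavenging(control_abs, test_abs):
    """DPPH assay (517 nm): ((control - test)/control) x 100."""
    return (control_abs - test_abs) / control_abs * 100

print(f"{pct_inhibition(0.82, 0.081):.1f}% inhibition")    # ~90.1%
print(f"{pct_scavenging(1.10, 0.055):.1f}% scavenging")    # ~95.0%

# Independent t-test between formulation and control at one concentration,
# with three replicates each, as in the statistical analysis above.
formulation = [74.3, 74.8, 74.7]
control = [70.1, 70.9, 70.4]
t, p = stats.ttest_ind(formulation, control)
print(f"t = {t:.2f}, p = {p:.4f}, significant = {p <= 0.05}")
```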
Anti-inflammatory activity

The results showed that the plant-based oral gel formulation had a better anti-inflammatory effect than the standard diclofenac used in the study. The different concentrations showed inhibition of protein denaturation of 74.6%, 77.4%, 83.6%, 86.8%, and 90.1% (Table 1 and Figure 2), comparable with the commercial oral gel, with a maximum anti-inflammatory activity of 90.1% at a concentration of 50 μl. Even at the lowest concentration of 10 μl, the anti-inflammatory activity was 74.6%. At concentrations of 10 μl, 20 μl, and 30 μl, the anti-inflammatory activity of the oral herbal formulation was significantly greater than that of the control (p<0.05); the activity was also better at 40 μl and 50 μl, although those differences were not statistically significant.

Antioxidant activity

The antioxidant assay of the formulated herbal gel was performed using the free radical DPPH. Methanolic violet-colored DPPH is reduced by hydrogen or electrons to a yellow or non-colored solution. The percentage of inhibition of the prepared formulation was recorded. At 10 μl, the color changed from violet to yellow with a high scavenging activity of 76.65%, and the highest activity, 99.15%, was found at 50 μl for the standard (Figure 3). The greatest antioxidant activity of the formulated gel was noted at 50 μl. Regarding the antioxidant activity, no statistically significant difference was noted between the oral formulation and the control, implying that both showed similar activity (p>0.05).

FIGURE 3: Bar graph showing the antioxidant activity of the oral gel formulation at various concentrations

Dose-dependent antioxidant activity was observed, comparable to the DPPH scavenging activity of ascorbic acid.

Discussion

Studies have demonstrated the effectiveness of plant-derived bioactives in treating oral lesions, including recurrent aphthous stomatitis, mucositis brought on by chemotherapy and radiotherapy, erosive leukoplakia, and oral lichen planus [10]. In this study, the herbal gel formulation was made from a combination of C. nucifera and T. aestivum extracts, and its antioxidant and anti-inflammatory properties were assessed. From the results obtained, the oral gel formulation was found to possess considerable antioxidant and anti-inflammatory activities.

Various studies have reported the medicinal value of the two herbs used in the gel preparation (C. nucifera and T. aestivum). A previous study demonstrated that a C. nucifera and T. aestivum oral gel preparation has significant antimicrobial effects and minimal cytotoxicity [8]. Other studies have independently evaluated the anti-inflammatory and antioxidant properties of C. nucifera and T. aestivum, but a synergistic effect had not been studied so far. According to Varma et al., coconut oil has high anti-inflammatory activity, acting through inhibition of cytokine levels [11]. It has been suggested that the anti-inflammatory activity of coconut is attributable to its regulatory role in the MAPK signaling pathway, one of the key pathways in various cellular and subcellular mechanisms [12].
A variety of phytonutrients, including tannins, phenols, flavonoids, triterpenes, steroids, leucoanthocyanidins, and alkaloids, were found in the ethanolic extract of the coconut mesocarp, whereas the butanol extract of coconut revealed the presence of condensed tannins, triterpenes, and saponins [13]. In a study by Silva et al., a crude coconut extract greatly reduced paw edema in rats induced by histamine (150 mg/kg) and serotonin (100 and 150 mg/kg); this could be due to an action of coconut on inflammatory mediators or to direct receptor blockade [14].

Choudhary et al. demonstrated that the increased anti-inflammatory activity of wheatgrass is due to its chlorophyll content [15]. Many recent clinical studies have suggested that wheatgrass has therapeutic benefits for a variety of illnesses [16]. These reports are consistent with the results of our study, in which the anti-inflammatory activity of the gel, a combination of these two extracts, was found to be better than that of the control (diclofenac). The combination thus produces a cumulative effect, enhancing the anti-inflammatory properties of both herbs, possibly because of the high amounts of polyphenols and flavonoids they contain.

The antioxidant activity of wheatgrass at various concentrations, classified by seed developmental stage, has been found to be potent [17]. In an animal study by Mat et al., lipid peroxidation in rats was prevented by the ascorbic acid in coconut water, while another constituent of coconut water, L-arginine, reduced the generation of free radicals [18]. Figure 4 shows a diagrammatic representation of the pathways involved in the anti-inflammatory and antioxidant properties. This is in concordance with the results of our study, wherein the antioxidant activity of the gel was comparable to the control (ascorbic acid), with the highest value of 95% at 50 μl. Chlorophyll, one of the active ingredients in wheatgrass extract, has been found to prevent the metabolic activation of carcinogens [16].

The improved antioxidant property may also be attributed to the free radical scavenging mechanism of coconut, as demonstrated in various studies [17,19]. Wheatgrass is rich in vitamins, which can scavenge free radicals and are also important components of antioxidant defense systems; this could aid in regulating the release of hydrogen peroxide from cells. Beyond the antioxidant activity, the high protein and amino acid contents of wheatgrass play a role in cleansing toxins from the body and confer considerable anti-inflammatory activity. In a study by Dasari et al., wheatgrass was shown to exert an anti-inflammatory effect on formalin-induced rat paw edema [20].

Additionally, the high content of easily absorbable vitamins and enzymes, together with minor mineral elements of both phytomedicines, could contribute to the cumulative enhanced effect of the gel. The antioxidant activity of these phytochemicals has also been emphasized to play a role in chemoprevention by reducing the oxidative stress responsible for the pathogenesis of cancer [21].
Considering the numerous medicinal properties of this herbal gel and its biocompatibility, it can be well accepted as an alternative oral gel for the treatment of oral lesions and for various other therapeutic purposes. The formulation of such an herbal gel with enhanced properties would pave the way for developing herbal alternative medicines that are highly safe and effective.

Limitations

The standard properties of the gel, such as viscosity, were not analyzed for the prepared oral gel. In future studies, it is recommended to test the activities of this gel at higher concentrations, and more combinations of herbs can be added and tested. Another limitation pertains to the confounding bias of not individually testing the anti-inflammatory and antioxidant activities of C. nucifera and T. aestivum, as we used the combination of these herbs.

Conclusions

The combination of these two herbs has been shown to possess potent anti-inflammatory and antioxidant properties. In addition to the previously demonstrated good antimicrobial activity and low cytotoxicity, the cumulative effects of these two products would aid in the treatment of various oral lesions. The unfavorable side effects of modern medicine have already drawn people's attention to natural remedies. In the future, more affordable and safer natural products should be employed to complement contemporary pharmaceuticals. Furthermore, the biological benefits of antioxidant-rich herbs in illnesses linked to oxidative stress require further study, alongside in vivo research and clinical trials.

FIGURE 2: Bar graph showing the anti-inflammatory activity of the gel formulation at various concentrations

TABLE 1: Comparative analysis of the anti-inflammatory property of the prepared formulation with control; a p-value of less than or equal to 0.05 was considered significant

Table 2 demonstrates the radical scavenging activity of the C. nucifera and T. aestivum gel formulation at various concentrations.
Comparative Transcriptome Analysis of Bacillus subtilis Responding to Dissolved Oxygen in Adenosine Fermentation

Dissolved oxygen (DO) is an important factor for adenosine fermentation. Our previous experiments have shown that a low oxygen supply in the growth period is optimal for high adenosine yield. Herein, to better understand the link between oxygen supply and adenosine productivity in B. subtilis (ATCC 21616), we sought to systematically explore the effect of DO on genetic regulation and metabolism through transcriptome analysis. Microarrays representing 4,106 genes were used to study temporal transcript profiles of B. subtilis fermentations in response to high oxygen supply (agitation 700 r/min) and low oxygen supply (agitation 450 r/min). The transcriptome data analysis revealed that low oxygen supply has three major effects on metabolism: enhanced carbon metabolism (glucose metabolism, pyruvate metabolism, and carbon overflow), inhibited degradation of nitrogen sources (glutamate-family amino acids and xanthine), and inhibited purine synthesis. Inhibition of xanthine degradation was the reason that low oxygen supply enhanced adenosine production. These findings provide potential targets that can be modified to achieve a higher adenosine yield. Expression of genes involved in energy metabolism, cell type differentiation, and protein synthesis was also influenced by oxygen supply. These results provide new insights into the relationship between oxygen supply and metabolism.

Introduction

Adenosine plays an important role in biochemical and physiological processes, including tissue protection and repair, neurotransmission, and anti-inflammatory action [1,2]. Furthermore, it has important medical uses in heart disease [3], as it plays a pivotal role in coronary circulation and heart protection and can be used to effectively terminate certain supraventricular tachycardias (SVT) that involve the atrioventricular (AV) node in the reentry pathway [4,5]. Adenosine is produced mainly by industrial fermentation. It is known that the biosynthesis of nucleotides proceeds from 5′-phosphoribosyl-pyrophosphate (PRPP), which is formed from ribose-5′-phosphate and ATP. The first complete purine nucleotide, inosinic acid (IMP), is synthesized through several reactions, and AMP is then synthesized through branched pathways. Adenosine is finally produced by dephosphorylation of AMP [6]. Konishi et al. first reported adenosine fermentation in 1968 [7]. Subsequently, Haneda et al. devoted great effort to it [8,9]. They found that a xanthine-auxotrophic Bacillus strain lacking adenase was a good adenosine producer; meanwhile, excess guanine, a slightly acidic medium (pH 5.0-6.0), and a sufficient oxygen supply were optimal conditions for adenosine biosynthesis.

The dissolved oxygen (DO) level is an important factor in aerobic fermentation that can significantly influence bacterial metabolism and product yield. Oxygen plays an important role in biomass synthesis, cell morphology, biochemical degradation, electron transport, and ATP availability [10,11]. It has also been reported that oxygen can affect various cell functions, including carbon metabolism, antibiotic production, and stress response [12]. The relationship between oxygen supply and fermentation productivity has been a focal point in aerobic fermentation, and optimization of the DO concentration is thus always necessary for industrial bioprocesses [13-17].

Our previous work also showed that oxygen supply is essential for adenosine-producing fermentation; however, a high DO level in the growth period restrains the overproduction of adenosine, and oxygen limitation in the early stage is beneficial for adenosine biosynthesis [18]. The intrinsic correlation between oxygen supply and adenosine biosynthesis is still not well understood, and no study has attempted to comprehensively explore the mechanisms by which oxygen is integrated into the regulatory network of adenosine biosynthesis. In this work we investigated the effect of the DO concentration on adenosine productivity using comparative transcriptome analysis of adenosine fermentations with different oxygen supplies. The results provide new insights to better elucidate the signaling network between DO level and adenosine yield, and offer guidance for further improvement of adenosine production.
Our previous works also showed that oxygen supply is essential for adenosine-producing fermentation. However high-level of DO in the growth period restrains the overproduction of adenosine. The limitation of oxygen in the early stage is beneficial for the adenosine biosysthesis [18]. The intrinsic correlativity between oxygen supply and adenosine biosynthesis is still not well understood, and no study has attempted to comprehensively explore the mechanisms of how oxygen is integrated into the regulatory network of adenosine biosynthesis. In this work we investigated the effect of the DO concentration on adenosine productivity by using comparative transcriptome analysis between adenosine fermentations with different oxygen supply. The results provided new insights to better elucidate the signaling network between DO level and adenosine yields, and provide guides for further improvement of adenosine production. Oxygen supply and adenosine yield In this research, two batches of adenosine fermentation were conducted, one with high oxygen supply (agitation 700 r/min) and the other with low oxygen supply (agitation 450 r/min). As shown in Figure S1, in the early stage (0-20 h) the DO was at a very low level (almost zero, oxygen limitation) at 450 r/min agitation, while the DO was at a much higher level at 700 r/min agitation. The increasing DO after 20 h suggested that the oxygen supply was enough for fermentation in the later stage. Adenosine yield under low oxygen supply was twice (3.63 g/L) as under high oxygen supply (1.81 g/L). It was supposed that oxygen limitation in the early stage may contribute to a higher adenosine yield. Herein, transcriptome analysis was undertaken for B. subtilis to elucidate relationship between oxygen supply and adenosine yield. Functional category enrichment of transcriptome data We studied the changes in gene expression in the early stage under high oxygen supply and low oxygen supply. Samples were taken respectively at 12 h and 18 h of fermentation process. Two independent cultured replicates were performed. The replicates were highly reproducible and the average coefficient of variation (CV) of the replicates was low (,16%). Pair-plots of intensities revealed a high Pearson correlation coefficient (.0.9, p,2.2e-16) (Text S1). Considering the samples from high oxygen supply as the control, we identified 434 (166 down-regulated genes, 268 upregulated genes) genes at 12 h and 854 (424 down-regulated genes, 430 up-regulated genes) genes at 18 h as significantly differently expressed (more than two folds) genes (listed in Table S1 and Table S2). The transcriptome data were analyzed based on diverse sources of gene functions using two computational tools, including MIPS (http://mips.gsf.de/proj/biorel/bacillus_subtilis.html) and T-profiler analysis (http://www.science.uva.nl/,boorsma/t-profilerbacillusnew/). MIPS functional analysis was used to assess functional category enrichments ( Fig. 1 and Table S3). The up-regulated genes at 12 h exhibited a significant enrichment in functions of metabolism, energy, cell type differentiation and biogenesis of cellular components (Fig. 1A). Other significant categories in the upregulated genes belong to the functions of cell cycle, protein fate, cell rescue and subcellular localization (Fig. 1A). The significant categories of the down-regulated genes at 12 h under low oxygen supply belong to the functions of metabolism and protein with binding functions (Fig. 1A). 
The up-regulated genes at 18 h showed significant enrichments in functions of energy, cell type differentiation and cell type localization (Fig. 1B). Other significant categories belong to the functions of metabolism, cell cycle, protein fate, protein with binding functions, cellular transport, cell rescue, biogenesis of cellular components and subcellular localization. The down-regulated genes at 18 h under low oxygen supply showed significant enrichment in cellular transport functions (Fig. 1B). Other significant categories belong to functions of metabolism, cell rescue, interaction with the environment and subcellular localization (Fig. 1B). Transcriptional factors play a central role to restructure the transcriptome responses to environmental signals. The microarray data were subsequently analyzed using T-profiler to identify some transcriptional factors in response to DO level change. T-profiler is a computational tool that uses the t-test to score changes in the average activity of predefined groups of genes based on Gene Ontology categorization, upstream matches to a consensus transcription factor binding motif, or KEGG pathway [19]. It transforms transcriptional data of single genes into the behavior of gene groups, reflecting biological processes in cells (TF model, KEGG model and Subtilist model). In this study, all transcriptome data were online performed for T-Profiler analysis (http://www. science.uva.nl/,boorsma/t-profiler-bacillusnew/). The gene groups with significant T values (E-value, 0.05, TF model) are presented in Tables 1. Seven co-regulated gene groups were found significantly disturbed by oxygen limitation at time point of 12 h in adenosine fermentation process, including SinR-Negative, FNR-Postive, Rok-Negative, ResD-Postive, SigF, SpoIIID-Negative, and SigE. Among them, five were related to sporulation and other cell fate (SinR, Rok, SigE, SigF, and SpoIIID); two were related to oxygen metabolism (FNR and ResD). In B. subtilis, SigE and SigF are both sporulation-specific sigma factors [20], while SpoIIID acts as a repressor during sporulation stage III to V [21,22]. Significant positive T-Value of SigE, SigF and SpoIIID (Table 1) demonstrated that the genes dependent on SigE and SigF and genes repressed by SpoIIID were overexpressed. Those derepressed genes in the SpoIIID-Negative group are almost SigE-dependent (Table 1). SinR is a dual-functional regulator that activates motility and represses competence [23]. Negaive T-Value of SinR-Negative indicated that the genes negatively regulated by SinR were partially repressed. Indeed, genes including yvfA, sipW, tasA, and yveK, which encode biofilm matrix, were significantly repressed [24][25][26][27]. Rok is not only a repressor of competence, but also a repressor of a number of genes that encode products with antibiotic activity in B. subtilis [28]. Positive T-value of Rok-Negative revealed that the genes repressed by Rok regulator were derepressed. The significantly derepressed genes included ab-lABCFDG-sboAX and comK ( Table 1). The sbo-alb operon was involved in subtilosin production [29] and was also directly positively regulated by ResD, an oxygen responser, which is activated by oxygen limitation [30]. FNR is, a sensor of oxygen that controls genes involved in facilitating adaptation to growth under oxygen limiting conditions, was induced by oxygen limitation [31]. Positive T-Value of FNR-Positive demonstrated that FNR-regulon was activated (Table 1). 
Seven co-regulated gene groups were significantly disturbed by oxygen limitation at the 12 h time point of the adenosine fermentation process: SinR-Negative, FNR-Positive, Rok-Negative, ResD-Positive, SigF, SpoIIID-Negative, and SigE. Among them, five are related to sporulation and other cell fates (SinR, Rok, SigE, SigF, and SpoIIID); two are related to oxygen metabolism (FNR and ResD). In B. subtilis, SigE and SigF are both sporulation-specific sigma factors [20], while SpoIIID acts as a repressor during sporulation stages III to V [21,22]. The significantly positive T-values of SigE, SigF, and SpoIIID-Negative (Table 1) demonstrated that the genes dependent on SigE and SigF, and the genes repressed by SpoIIID, were overexpressed; the derepressed genes in the SpoIIID-Negative group are almost all SigE-dependent (Table 1).

SinR is a dual-function regulator that activates motility and represses competence [23]. The negative T-value of SinR-Negative indicated that the genes negatively regulated by SinR were partially repressed. Indeed, genes including yvfA, sipW, tasA, and yveK, which encode biofilm matrix components, were significantly repressed [24-27]. Rok is not only a repressor of competence but also a repressor of a number of genes whose products have antibiotic activity in B. subtilis [28]. The positive T-value of Rok-Negative revealed that the genes repressed by Rok were derepressed; the significantly derepressed genes included the albABCFDG-sboAX genes and comK (Table 1). The sbo-alb operon is involved in subtilosin production [29] and is also directly positively regulated by ResD, an oxygen responder activated by oxygen limitation [30].

FNR, a sensor of oxygen that controls genes facilitating adaptation to growth under oxygen-limiting conditions, was induced by oxygen limitation [31]. The positive T-value of FNR-Positive demonstrated that the FNR regulon was activated (Table 1). It is understandable that the genes in the FNR regulon were induced under a low DO level. ResD is required for both anaerobic and aerobic growth [32]. When B. subtilis grows under oxygen-limiting conditions, it activates genes related to nitrate respiration. The positive T-value of ResD-Positive showed that the genes of the ResD-Positive group were activated, including the sbo-alb operon, nasDE, fnr, and resDE (Table 1). The nasDE genes, members of the nasBCDEF operon, which encode the NADH-dependent nitrite reductase required for both anaerobic respiration and nitrogen metabolism, were induced by either nitrogen limitation or oxygen limitation [33,34].

Eight significantly regulated gene groups were found at the 18 h time point of the adenosine fermentation process: SigE, Strcon-Negative, SigF, SpoIIID-Negative, SigB, ArfM-Positive, Fur-Negative, and CcpA-Negative (Table 1). ArfM, an FNR-dependent regulator, is required for expression of nasDE and hmp [35]. T-profiler analysis showed strong activation of genes in the ArfM regulon under low oxygen supply. ArfM and FNR are global transcriptional regulators that activate the expression of genes encoding many of the enzymes required for an anoxic environment. CcpA is a global regulator of carbon metabolism in B. subtilis that controls carbon metabolism and mediates carbon catabolite repression (CCR) [36-38]. The significantly negative T-value for CcpA-Negative demonstrated that the genes of the CcpA-Negative group were repressed, indicating that oxygen limitation is beneficial for glucose metabolism under our experimental conditions.

SigB is a general stress response regulator that controls at least 150 genes. The members of the SigB regulon are transiently induced following heat shock; salt, ethanol, or acid stress; or glucose limitation and phosphate starvation [39]. In this study, we observed a significantly positive T-value for SigB under low oxygen supply, revealing that oxygen limitation activated the expression of some genes in the SigB regulon (Table 1). In B. subtilis, iron homeostasis is regulated by Fur, which represses the expression of genes related to siderophore biosynthesis and iron-uptake proteins [40]. Two factors have been shown to induce the Fur regulon: iron limitation and oxidative stress [41,42]. The negative T-value of Fur-Negative showed that this gene group was repressed. Because the culture medium used in our study was a rich medium, we hypothesized that the repression of the Fur regulon was likely related to oxidative stress; that is, a high oxygen supply may cause oxidative stress to some degree. Strcon-Negative is involved in energy metabolism; its positive T-value indicated that the genes of Strcon-Negative were partially derepressed. The functions of these gene groups are thus consistent with the enriched functional categories.

Effect of low oxygen supply on metabolism

Carbon metabolism. The average glucose consumption rate was much higher under low oxygen supply (0.64 g/(L·h)) than under high oxygen supply (0.16 g/(L·h)). Consistent with the higher glucose consumption rate, a number of genes involved in glucose utilization were up-regulated under low oxygen supply. One glucose uptake gene, glcU, whose expression depends on a forespore (late)-specific sigma factor (SigG) [43], was overexpressed at 18 h. Two genes involved in glycolysis (gapA and eno) were also up-regulated at 18 h. The pathways of pyruvate metabolism were promoted under low oxygen supply.
The alsSD operon (encoding acetolactate synthase and acetolactate decarboxylase) was markedly up-regulated, suggesting that acetoin formation from pyruvate, one of the overflow metabolism pathways that serve to excrete excess carbon from the cell, was activated by oxygen limitation. The pdhABCD operon, which encodes the pyruvate dehydrogenase complex (PDH) catalyzing the biosynthesis of acetyl-CoA from pyruvate, was also induced under low oxygen supply. Moreover, the mmgABC genes, involved in the degradation of branched-chain amino acids and fatty acids, which generates additional acetyl-CoA, were up-regulated. Acetyl-CoA stands at the crossroads of many metabolic pathways, such as citrate synthesis, acetate formation, fatty acid synthesis, and amino acid synthesis (leucine, cysteine, methionine, SAM, arginine, and glycine). When grown under glucose-excess conditions, B. subtilis metabolizes a large proportion of the glucose only as far as pyruvate and acetyl-CoA and subsequently converts these compounds to metabolic by-products (lactate, acetate, and acetoin), which are excreted into the extracellular environment [44]. Induction of alsSD and pdhABCD thus suggests that the bacteria were in a glucose-excess state under low oxygen supply. The genes involved in glycogen synthesis (glgABCDP at 12 h, glgACP at 18 h) were also up-regulated. These results are consistent with the study of Li et al., which revealed that low agitation promoted the glucose consumption rate in pyruvate fermentation of Torulopsis glabrata [45]. We also investigated the accumulation of pyruvate, lactate, acetoin, and acetate at the final time point of the fermentation processes. The results were in good agreement with the transcriptome analysis: as shown in Figure 2, the concentrations of pyruvate, lactate, and acetate, metabolites typical of carbon-excess conditions, were much higher under low oxygen supply. In contrast, the genes related to the utilization of other carbon sources, such as fruA, gamP, licC, malP, treP, iolBCFH, rbsACBK, and citM, were all down-regulated under oxygen limitation. These genes are involved in the transport of fructose, glucosamine, lichenan, maltose, inositol, ribose, and citrate, respectively.

Metabolism of amino acids. The intracellular amino acids in B. subtilis were analyzed. Most amino acids were present at low concentrations except glutamate, which was the most abundant in this study (Fig. 3). This has also been found in other bacteria; for instance, in B. megaterium and E. coli the principal amino acid constituent is glutamate [46,47]. The intracellular pools of amino acids, especially glutamate, were larger under high oxygen supply at 12 h than under low oxygen supply, whereas at 18 h the pools were almost at the same level (Fig. 3). Amino acids are important metabolites that must be maintained at adequate levels to serve various physiological processes [48]. The lower concentrations of amino acids under low oxygen supply indicated that amino acid metabolic activity was lower. This conclusion is supported by the transcriptional observations: genes encoding glutamate-family degradation pathways (rocG, rocACDF, and hutUG) were down-regulated under low oxygen supply, while genes involved in glutamate synthesis (gltAB) were up-regulated. In bacteria, glutamate metabolism modulates the nitrogen-carbon balance and is tightly regulated through the distribution of 2-oxoglutarate.
When the bacteria are in a carbon-excess state, 2-oxoglutarate is siphoned off by glutamate synthase and catabolism of existing amino acids is avoided [44]. Up-regulation of gltAB and down-regulation of rocG, rocAC, and hutUG might indicate a carbon-excess state confronted by the bacteria under low oxygen supply. The concentration of 2-oxoglutarate was lower under low oxygen supply (data not shown). In microorganisms, 2-oxoglutarate generally indicates nitrogen deficiency, while glutamine indicates nitrogen sufficiency [49]. The lower 2-oxoglutarate concentration might indicate that the nitrogen source was sufficient under low oxygen supply. This nitrogen sufficiency probably resulted from enhanced glutamate synthesis. Genes involved in histidine synthesis (hisA, hisFHIZ) and chorismate synthesis (aroACF) were repressed at 12 h or 18 h under low oxygen supply. Because histidine and nucleotide synthesis use the same precursor, phosphoribosyl pyrophosphate (PRPP), the decrease in histidine synthesis might promote nucleotide synthesis. The genes of the arginine synthesis pathway (argBDFGH) were up-regulated at 12 h, and the isoleucine synthesis pathway genes (ilvABC) were up-regulated at 18 h. Three genes (bcd, bkdAA, bkdAB) involved in the degradation of branched-chain amino acids were up-regulated at 12 h under low oxygen supply. The aspB gene, which is involved in aspartate synthesis, was up-regulated at 18 h. Expression of genes of the sulfur assimilation pathway (ssuABC) was significantly down-regulated at 18 h under low oxygen supply. A number of ribosome-related genes (rplMOQW, rpmCDFJ, rpoE, rpsABGLM) were up-regulated at 18 h. Low oxygen supply also exhibited a great effect on the transport of amino acids. The glutamine ABC transporter genes (glnHMPQ) were significantly up-regulated under low oxygen supply, while the ammonium uptake gene (nrgA) and proton/glutamate symport protein genes (glcT, glcP) were significantly down-regulated.

Metabolism of purine nucleotides. Purine nucleotide metabolism, which is directly related to adenosine production, was significantly influenced by oxygen supply. Eight genes of xanthine catabolism (pucH, pucM, pucK, pucEDCBA), which is inhibited by nitrogen sufficiency [50], were down-regulated under low oxygen supply. Since the adenosine-producing strain is a xanthine auxotroph, the repression of xanthine degradation probably benefits adenosine biosynthesis and growth. To confirm this hypothesis, we evaluated the effect of xanthine on adenosine production. It was found that adenosine production in the fermentation processes with sufficient xanthine was continuous and stable, whereas in the fermentation processes with no xanthine addition, adenosine was degraded in the later period (Fig. 4A). This result strongly suggested that xanthine is important for adenosine production. The significance of the effect that xanthine exerted on adenosine production was tested using one-way ANOVA with Dunnett's multiple comparisons test (GraphPad InStat, GraphPad Software Inc., San Diego, CA). The results showed that appropriate xanthine addition could significantly promote adenosine production (Text S2). Moreover, we analyzed the xanthine concentration over the time course of the fermentation process under different DO conditions. As shown in Figure 4B, xanthine in the medium was exhausted much earlier at the high DO level than at the low DO level. These results further supported that inhibition of xanthine degradation was the reason that low oxygen supply enhanced adenosine production.
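The xanthine-addition comparison above was evaluated with one-way ANOVA followed by Dunnett's multiple-comparisons test against the no-addition control. A minimal sketch of the same analysis in Python is given below (SciPy >= 1.11 provides scipy.stats.dunnett); all titer values are fabricated placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical adenosine titers (g/L) for a control and two xanthine doses.
rng = np.random.default_rng(0)
control = rng.normal(10.0, 0.8, size=6)   # no xanthine added
low_x   = rng.normal(11.5, 0.8, size=6)   # low xanthine addition
high_x  = rng.normal(12.4, 0.8, size=6)   # high xanthine addition

# Global one-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(control, low_x, high_x)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment compared with the control (SciPy >= 1.11).
res = stats.dunnett(low_x, high_x, control=control)
for name, p in zip(["low xanthine", "high xanthine"], res.pvalue):
    print(f"{name} vs control: p = {p:.4f}")
```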
The purA gene and the genes of the pur operon (purBCDEHKLMNQST), which are directly involved in de novo purine nucleotide synthesis, were down-regulated under low oxygen supply at 12 h, whereas the genes of the pur operon exhibited no significant differential expression at 18 h except purK, which was overexpressed. Moreover, a gene (pbuG) encoding a hypoxanthine/guanine permease was down-regulated. Reduced expression of the pur operon was clearly a restriction on higher adenosine production under low oxygen supply. The purine synthesis pathway involves many substrates, such as glycine, PRPP, formate, glutamine, fumarate, and aspartate. In the transcriptome data, the genes related to glutamine transport (glnHMPQ) and aspartate formation (aspB) were up-regulated. However, one gene (gntZ) of the PRPP formation pathway was down-regulated at 18 h. The pur operon and pbuG are repressed by PurR, whose activity is inhibited by PRPP and activated by adenine-containing compounds. If the repression of the pur operon and pbuG by PurR could be relieved, the adenosine yield might be higher. Accumulation of other adenine nucleotides at the final time point of the fermentation process was also analyzed. The concentrations of adenine, AMP, and IMP were also higher under low oxygen supply (Figure 5). These data suggested that low oxygen supply might benefit the accumulation of adenine nucleotides. The higher accumulation of adenine nucleotides might explain why the pur regulon was repressed under low oxygen supply.

Effect of low oxygen supply on respiration and sporulation

Respiration and energy metabolism. Oxygen limitation induced 14 bioenergetics-related genes, including 4 ATP synthase genes (atpAFGH) and 10 cytochrome oxidase genes (qoxABCD, ctaCDEF, and cydAB). Induction of these genes showed that aerobic respiration and ATP synthesis were accelerated. Enhancement of ATP synthesis might reveal a demand for ATP, or energy starvation, compared with the strain under high oxygen supply. Induction of cydABCD, resABC, and qoxABCD has also been found under anaerobic conditions [12]. This result suggests that the induction of terminal oxidases may be needed to compensate for the lack of electron acceptors. Either a demand for ATP or energy starvation would accelerate the glucose consumption rate. The nasDE and narGHI genes were all induced by low oxygen supply. It is known that the nasDE genes, encoding assimilatory nitrite reductases, are induced by nitrogen limitation and oxygen limitation, while the narGHI genes are induced by oxygen limitation only. Induction of nasDE and narGHI, which are controlled by FNR, showed that nitrite respiration is activated by low oxygen supply. Activation of aerobic respiration and nitrite respiration will accelerate the reoxidation of NADH to NAD+. Under low oxygen supply the glucose consumption rate was much higher, so the bacteria may need more NAD+. Since the concentration of total NAD(H) is almost constant, the strain needs to accelerate the regeneration rate of NAD+ from NADH.

Sporulation. From the transcription results, it can be seen that sporulation and competence were promoted under oxygen limitation. However, no spore formation was found during the fermentation process (data not shown). It has been reported that D-ribose fermentation and riboflavin fermentation use asporogenous mutant strains.
It has been suggested that the sporulation relay may have an impact on morphological phenotypes as well as metabolism. D-ribose production is related to cell elongation [53]. A random, transposon-tagged mutagenesis screening approach found that some overproduction mutants were related to cell morphological differentiation genes (yloN, yjaU, phrC, cotE, sigW, fliP) [54]. In our experiment, the glucose transport gene glcU (SigG-dependent) was up-regulated, while phrC and cotE were down-regulated. It is worth studying whether there is any intrinsic relationship between the sporulation process and the production of nucleotide substances (including adenosine).

Oxygen limitation response network and adenosine production improvement

Figure 6 illustrates the postulated response network associated with low oxygen supply. From the network, we can conclude that low oxygen supply enhanced glucose metabolism and carbon overflow. Degradation of glutamate-family amino acids and of xanthine was repressed by low oxygen supply. Purine nucleotide synthesis was inhibited by low oxygen supply. Low oxygen supply also induced respiration. Although adenosine productivity was higher under low oxygen supply, there were two drawbacks in adenosine fermentation under low oxygen supply. Firstly, although the glucose consumption rate was much higher under low oxygen supply, part of the glucose consumed was used in the acetoin overflow pathway and the acetyl-CoA formation pathway. It has been hypothesized that inhibition of the carbon overflow pathway may increase nucleotide production efficiency [55]. Chen et al. reported that citrate is a good substrate for suppressing carbon overflow [55]. Herein, we investigated the effect of citrate on adenosine fermentation. As shown in Figure 7, citrate could increase adenosine production by 15%. Secondly, the de novo purine nucleotide synthesis pathway was inhibited at 12 h. We supposed that, if the purine nucleotide pathway could be relieved from this repression, a higher adenosine yield would be achieved. We also investigated the effect of Mg2+, thiamine, and glutamate on adenosine yield. Mg2+ is co-transported with citrate in B. subtilis. Thiamine is involved in gluconeogenesis, while glutamate is a substrate under gluconeogenic conditions. Mg2+, thiamine, and glutamate were all beneficial for adenosine yield (Fig. 7).

Discussion

Although many studies have demonstrated that the dissolved oxygen level has a great effect on product yield or cell growth in fermentation, few of them have addressed the mechanisms by which DO affects metabolism and genetic regulation. In this study, transcriptional profiling was used to assess how oxygen supply influences the metabolic and genetic regulatory network in adenosine-producing B. subtilis. The results revealed that oxygen supply significantly influenced the regulatory networks of SpoIIID-Negative, ResD-Positive, Rok-Negative, FNR-Positive, SinR-Positive, Strcon-Negative, ArfM-Positive, Fur-Negative, and CcpA-Negative. SigB-, SigE-, and SigF-dependent transcription was significantly activated by low oxygen supply. Through transcriptome profiling analysis, we found that the limitation of oxygen significantly enhanced glucose utilization while slowing the catabolic rate of amino acids (glutamate, histidine, and arginine). These results provide new insights for a better elucidation of the signaling networks between DO level and adenosine yield, and provide guidance for setting up optimized oxygen supply strategies. This work is also a good reference for most aerobic B. subtilis fermentations aimed at increasing metabolite productivity.

Inhibition of xanthine degradation is the reason why low oxygen supply enhanced adenosine production. When grown under low oxygen supply, xanthine degradation was somehow inhibited. During the fermentation process, xanthine is exhausted in the later period and adenosine production is then inhibited; inhibition of xanthine degradation delayed the time point at which xanthine was exhausted, and thus the adenosine-production period was prolonged. However, why xanthine exhaustion limited adenosine yield is unknown. We hypothesized that the reason might be inhibition of adaptive reversion by xanthine. Since the producing strain (a xanthine auxotroph) was selected by traditional methods and no genetic manipulation was used, adaptive reversion of the xanthine auxotrophy, which removes adenosine production capacity, probably arises during adenosine fermentation. For a nutrient auxotroph, nutrient sufficiency can inhibit adaptive reversion. Kodaira et al. reported that adaptive reversion of an adenosine-producing strain was efficiently inhibited by guanine sufficiency [56]. As xanthine degradation was inhibited under low oxygen supply, the time point at which xanthine was exhausted was probably delayed, so the adenosine-production period was prolonged. However, whether xanthine can efficiently inhibit adaptive reversion needs to be confirmed. Although adenosine production was higher under low oxygen supply, enhancement of the carbon overflow pathway and inhibition of purine nucleotide synthesis were not conducive to adenosine production. Enhancement of the carbon overflow pathway probably reduces the carbon flow to the PPP pathway, which generates ribose [57]. Herein, citrate was used to inhibit carbon overflow, and adenosine production was increased. Under oxygen limitation, many genes coding for oxidases, nitrate reductase, and nitrite reductase were up-regulated. These enzymes are all involved in recycling NADH to NAD+. Since the concentration of NAD(H) is almost constant in the bacteria [58], turnover of NAD+ is very important for a continuous NAD+ supply. Induction of these genes demonstrates that NAD+ availability was probably restricted, so the bacteria tried to accelerate NAD+ recycling. This restriction of NAD+ might be due to limited oxygen supply, as it has been demonstrated that oxygen limitation increases the NADH level in Bacillus subtilis and activates Rex-repressed genes, which respond to a high NADH/NAD+ ratio [59]. The qoxABCD, ctaCDEF, cydAB, narGHI, and nasDE genes are all directly or indirectly positively regulated by ResDE (DBTBS), which may sense the state of menaquinone [60]. Induction of these genes might demonstrate that ResDE was activated. Consistently, the T-profiler analysis also revealed that ResDE was activated. Activation of ResDE also demonstrated that the bacteria were probably in a reduced state compared with the bacteria under high oxygen supply. So we might conclude that oxygen limitation restricts the NAD+ turnover rate and results in a reduced state. In a reduced state (NAD(P)H sufficient), bacteria will prefer pathways that involve dehydrogenases utilizing NAD(P)H, to keep redox balance [61]. The reduced state under oxygen limitation will probably restrict the TCA cycle, the major pathway that generates NADH, to keep redox balance. In E. coli, a reduced state (increased QH2/Q) activates ArcAB and represses the TCA cycle [62,63].
Although the genes of the TCA cycle were not repressed in this study, the genes rocG, rocAC, hutU, and hutG, which are involved in pathways that feed the TCA cycle, were all down-regulated. The acetoin-formation carbon overflow pathway, which is activated by glucose, was enhanced under low oxygen supply, as alsSD was induced. Induction of the carbon overflow pathway might reflect the reduced state of the bacteria grown under low oxygen supply. If the acetoin-formation overflow pathway is enhanced, carbon flow to the TCA cycle might be reduced and additional NADH generation in the TCA cycle prevented.

Strains and Fermentation

In this study, Bacillus subtilis ATCC 21616 (xanthine auxotrophic) was used. The strain was first grown on slant medium (10 g yeast extract, 10 ml corn syrup, 10 g peptone, 10 g agar powder per liter of distilled H2O; pH 7.0) for 48 h at 32 °C. The strain was then transferred to seed medium (10 g yeast extract, 10 ml corn syrup, 10 g peptone, 10 g agar powder, 10 g sugar per liter of distilled H2O; pH 7.0) and cultured on a shaker for 16 h at 32 °C and 200 r/min. The seed culture was then inoculated into the medium used for adenosine fermentation in batch mode for 72 h. Two batches were conducted in duplicate, one at high DO (agitation 700 r/min) and one at low DO (agitation 450 r/min). The medium composition of the two batches was: 80 g glucose, 7.5 g yeast extract, 30 ml corn syrup, 10 g KH2PO4, 0.5 g MgSO4, 0.01 g MnSO4, 5 g NH4Cl, and 10 g monosodium glutamate (MSG) per liter of distilled H2O, pH 7.5. The fermentation temperature was 32 °C. Samples were collected at 12 h and 18 h from two independent batches for transcriptome and metabolite analysis. Samples were quenched in liquid nitrogen immediately after collection and then stored at −80 °C.

Analytical Methods for Metabolites

Intracellular amino acids: 1.0 ml of B. subtilis culture was immediately centrifuged for 1 min at 10,000 g at 4 °C. Metabolites were extracted from the cell pellets with 200 μl of 5% HClO4 in an ice bath for 15 min. After centrifugation at 10,000 g for 5 min, the supernatant was neutralized with a K2CO3 solution, and the KClO4 precipitate was removed by centrifugation. The supernatant was stored at −20 °C until use. The amino acid content was determined using an amino acid analyzer (L-8900, Hitachi High-Tech, Japan) under the experimental conditions recommended for protein hydrolysates. Before analysis, the sample was deproteinized by 10% trichloroacetic acid (TCA) precipitation. Organic acids and nucleosides: organic acids and nucleosides were determined using a Waters HPLC system with the Breeze Data Processor (Waters Corp., Milford, Massachusetts, USA). The separation was carried out on an Agilent Zorbax SB-Aq column (250 mm × 4.6 mm, 5 μm) (Agilent, Palo Alto, CA, USA). A mobile phase of 0.01 mol/L H2SO4 solution (pH 2.0) was used at a flow rate of 0.6 mL/min. The column temperature was maintained at 30 °C, the injection volume was 10 μl, and the detection wavelength was 210 nm.

Microarray Construction and Hybridization

The Bacillus subtilis microarrays (BSU1.0) were customized using the Agilent eArray program according to the manufacturer's recommendations. Each customized microarray (8 × 15K) contained spots in triplicate with 4,106 gene-specific 60-mer oligonucleotides representing the 4,106 protein-coding genes in B. subtilis (as reported for the B. subtilis genome at http://genolist.pasteur.fr/SubtiList/). Samples stored at −80 °C were used for RNA isolation.
Total RNA samples were isolated from cultures using Tiangen reagent according to the manufacturer's instructions (http://www.tiangen.com/newEbiz1/EbizPortalFG/portal/html/index.html). The RNAs were subsequently purified with the QIAGEN RNeasy Mini kit. Quality and quantity were determined by NanoDrop UV spectroscopy and analyzed on an RNA 6000 Nano LabChip using a 2100 Bioanalyzer (Agilent Technologies). Two micrograms of RNA from each sample were used for cDNA synthesis. cRNA was subsequently synthesized using aaUTP. The amplified cRNA was purified using the QIAGEN RNeasy Mini kit, and its quality and quantity were determined using a spectrophotometer at 260 nm and 280 nm. The purified cRNA was then labeled with Cy5-UTP. The fluorescently labeled cRNA was purified using the QIAGEN RNeasy Mini kit and fragmented in fragmentation buffer (Agilent). A 55 μl aliquot of the prepared cRNA was mixed with 55 μl of GE Hybridization Buffer HI-RPM (Agilent). Hybridization was performed in an Agilent Microarray Hybridization Chamber (G2534A) for 16 h at 65 °C with rotation at 10 r/min. After hybridization, the slides were washed in Gene Expression Wash Buffer (Agilent). Microarrays were scanned using an Agilent G2565BA scanner at 5 μm resolution. All data are MIAME compliant, and the raw data have been deposited in the Gene Expression Omnibus (GEO) database.

Microarray Data Analysis

For data extraction, normalization, and filtration, we used the methods described in our previous work [19]. To identify differentially expressed genes responding to different DO levels, we used the fold-change method (2-fold as the cutoff value), considering the samples at high DO as the control. Genes with a fold change greater than 2 were considered significantly disturbed genes responsive to oxygen supply (a minimal sketch of this filter and of the T-profiler group statistic is given after the supporting-information list below). All differentially expressed genes were uploaded for functional category analysis (http://mips.helmholtz-muenchen.de/proj/funcatDB/) with a threshold of 0.001 and for T-profiler analysis (http://www.science.uva.nl/~boorsma/t-profiler-bacillusnew/) [19].

Figure S1 DO and Adenosine Yield. The DO during the fermentation was monitored and depicted, and the final adenosine yield is also shown. (DOC)

Table S1 List of genes with a two-fold change at 12 h. All genes that changed more than two-fold at 12 h are listed, together with the fold changes. (XLS)

Table S2 List of genes with a two-fold change at 18 h. All genes that changed more than two-fold at 18 h are listed, together with the fold changes. (XLS)

Table S3 MIPS functional analysis of the genes with a two-fold change at 12 h and 18 h. Genes that changed more than two-fold were analyzed for enrichment of functional categories online (http://mips.gsf.de/proj/biorel/bacillus_subtilis.html). (XLS)

Text S1 Replicates and Reproducibility. The reproducibility of replicate experiments is discussed. (DOC)

Text S2 The effect of xanthine addition on adenosine production. One-way analysis of variance (ANOVA) was used. (DOC)
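As promised in the data-analysis paragraph above, the following minimal sketch illustrates the two computations used there: the 2-fold differential-expression filter and a T-profiler-style group T-value (mean log-ratio of a regulon against all remaining genes, scaled by the pooled standard deviation). The expression values and regulon membership are synthetic placeholders, not data from this study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
genes = [f"gene{i:04d}" for i in range(1000)]
# Hypothetical log2 ratios of low-DO vs high-DO samples (high DO as control).
log2_ratio = pd.Series(rng.normal(0.0, 1.0, len(genes)), index=genes)

# 1) Fold-change filter: |log2 ratio| >= 1 corresponds to the 2-fold cutoff.
up = log2_ratio[log2_ratio >= 1.0].index
down = log2_ratio[log2_ratio <= -1.0].index

# 2) T-profiler-style T-value for a gene group (regulon) vs all other genes.
def group_t_value(ratios, members):
    in_grp = ratios.loc[ratios.index.intersection(members)]
    out_grp = ratios.drop(in_grp.index)
    n1, n2 = len(in_grp), len(out_grp)
    pooled_sd = np.sqrt(((n1 - 1) * in_grp.var() + (n2 - 1) * out_grp.var())
                        / (n1 + n2 - 2))
    return (in_grp.mean() - out_grp.mean()) / (pooled_sd * np.sqrt(1/n1 + 1/n2))

regulon = genes[:25]  # hypothetical regulon membership
print(len(up), "up,", len(down), "down; T-value =",
      round(group_t_value(log2_ratio, regulon), 2))
```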
Hyperactivation of HER2-SHCBP1-PLK1 axis promotes tumor cell mitosis and impairs trastuzumab sensitivity to gastric cancer

Trastuzumab is the backbone of HER2-directed gastric cancer therapy, but poor patient response due to insufficient cell sensitivity and drug resistance remains a clinical challenge. Here, we report that HER2 is involved in cell mitotic promotion for tumorigenesis by hyperactivating a crucial HER2-SHCBP1-PLK1 axis that drives trastuzumab sensitivity and is targeted therapeutically. SHCBP1 is an Shc1-binding protein but is detached from the scaffold protein Shc1 following HER2 activation. Released SHCBP1 responds to the HER2 cascade by translocating into the nucleus following Ser273 phosphorylation and then contributes to cell mitosis regulation by binding PLK1 to promote the phosphorylation of the mitotic interactor MISP. Meanwhile, Shc1 is recruited to HER2 for MAPK or PI3K pathway activation. Clinical evidence also shows that increased SHCBP1 predicts a poor response of patients to trastuzumab therapy. Theaflavine-3, 3′-digallate (TFBG) is identified as an inhibitor of the SHCBP1-PLK1 interaction, which is a potential trastuzumab-sensitizing agent and, in combination with trastuzumab, is highly efficacious in suppressing HER2-positive gastric cancer growth. These findings suggest that an aberrant mitotic HER2-SHCBP1-PLK1 axis underlies trastuzumab sensitivity and offer a new strategy to combat gastric cancer.

Resistance to trastuzumab in HER2-positive gastric cancer patients remains a clinical challenge. In this study, the authors demonstrate that HER2 promotes tumorigenesis in gastric cancer by regulating mitotic progression through a Shc1-SHCBP1-PLK1-MISP axis, and they identify a compound, TFBG, able to disrupt the SHCBP1/PLK1 interaction and to synergize with trastuzumab.

Human epidermal growth factor receptor 2 (HER2) is often amplified or overexpressed in gastric carcinoma 1. Through dimerization with other HER members, HER2 activates downstream pathways, including the mitogen-activated protein kinase (MAPK) and phosphatidylinositol 3-kinase (PI3K) pathways, and promotes tumorigenesis by increasing cell proliferation, metastasis, and invasion 2. Many HER2-directed therapies have been used in the treatment of HER2-positive cancers. Trastuzumab, pertuzumab, lapatinib, and T-DM1 are used in HER2-positive breast cancer 3. Trastuzumab and trastuzumab deruxtecan are effective anti-HER2 therapies showing survival benefit in gastric cancer [4][5][6]. In the trastuzumab for gastric cancer (ToGA) trial, the overall survival (OS) of patients with HER2-positive advanced gastric cancer was improved by 2.7 months when trastuzumab was combined with conventional chemotherapy, compared with chemotherapy alone. These results led to the approval of trastuzumab, which is now the first-line treatment in combination with oxaliplatin or fluorouracil chemotherapy for patients with HER2-positive metastatic gastric cancer 4. However, many patients with HER2-positive gastric cancer still succumb to their disease following trastuzumab therapy; one of the main reasons is intrinsic and secondary resistance. The mechanisms underlying the insufficient sensitivity of HER2-directed therapy are proposed to be aberrant activation of HER2 and downstream signaling, including amplification, upregulation, or mutation of HER2, KRAS, PIK3CA, AKT, and PTEN, which make it difficult to inhibit the activation of downstream signaling and cell growth using trastuzumab alone 7,8.
HER2-positive gastric cancer has been found to share some of these mechanisms, but also manifests specific mechanisms of resistance to trastuzumab. For example, intratumoral HER2 heterogeneity is more frequent in gastric cancer than in breast cancer, with values ranging from 23 to 79% (ref. 9). In addition, loss of HER2 protein expression between pretreatment and posttreatment samples of gastric cancer patients is also a main cause of trastuzumab resistance 10. Other proposed resistance mechanisms include alterations in HER2 downstream signaling and bypass pathways, such as upregulation of kallikrein 10 (KLK10), metastasis associated in colon cancer 1 (MACC1), and C-Maf-inducing protein (CMIP), hyperactivation of the HER4-YAP1 axis, and TNF-α-induced mucin 4 (MUC4) overexpression [11][12][13][14][15]. To overcome drug resistance and sensitize cells to trastuzumab, new therapeutic agents or combination therapies have recently emerged, such as afatinib (a HER1, HER2, and HER3 inhibitor), trastuzumab deruxtecan (an antibody-drug conjugate carrying a topoisomerase I inhibitor), and PI3K or MAPK pathway inhibitors [16][17][18][19]. Although trastuzumab deruxtecan has been shown to provide a survival benefit in patients with HER2-positive gastric cancer, no other new data have been obtained to date, and none of the new anti-HER2 treatment strategies improved survival significantly enough to justify registration 6,20. Thus, identifying undiscovered molecular mechanisms underlying HER2-promoted tumorigenesis and developing new drugs to raise trastuzumab sensitivity are critical. HER2 and other epidermal growth factor receptors (ERBBs) always employ Shc1, an important intracellular scaffold protein, to recruit cytoplasmic targets and amplify downstream signals to activate the MAPK and PI3K/AKT pathways. Importantly, Shc1 is a hub that binds multiple interactors dynamically to direct the temporal flow of signaling information following growth factor stimulation 21. Here, we report that, in addition to the canonical MAPK and PI3K downstream signaling pathways, HER2 promotes tumorigenesis by direct regulation of mitotic progression through a crucial Shc1-SHCBP1-PLK1-MISP axis, which drives the sensitivity of HER2-positive cells to trastuzumab. Theaflavine-3, 3′-digallate (TFBG) can selectively inhibit the SHCBP1-PLK1 complex and render gastric cancer sensitive to trastuzumab. Targeted TFBG treatment combined with trastuzumab exhibits substantial growth inhibition and tumor regression, indicating potential clinical applications in HER2-positive gastric cancer therapy.

Results

SHCBP1, a Shc1-binding protein, is the downstream effector of HER2. To gain insight into the subset of Shc1-binding proteins and identify the undiscovered downstream axis of HER2 that is involved in gastric tumorigenesis, we mapped the binding proteins associated with Shc1 and screened for interactors that were correlated with HER2 overexpression and were potential upregulated oncogenes in gastric cancer. First, we identified the HER2-positive gastric cancer cell lines NCI-N87 and SNU-216 using western blot, immunofluorescence (IF), immunohistochemistry (IHC), and fluorescence in situ hybridization (Supplementary Fig. 1a, b). Then, we engineered an SNU-216 cell line to stably express Flag-tagged Shc1 and immunoprecipitated Flag-Shc1 following epidermal growth factor (EGF) stimulation. Using liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis, we identified 32 Shc1-binding partners, including EGFR, HER2, HER3, JUP, and SHCBP1 (Fig. 1a).
To screen whether any of the binding proteins are potential HER2 downstream regulators, we carried out a gene expression correlation analysis between the identified Shc1 interactors and HER2 expression in 659 gastric cancer specimens obtained from the Gene Expression Omnibus (GEO) database, and 24 HER2-correlated Shc1-binding proteins with a Spearman coefficient ≥ 0.3 were screened out (Fig. 1b). After that, we determined whether any of the identified binding proteins were potential upregulated oncogenes involved in gastric tumorigenesis. Gene expression profiles of gastric cancerous and adjacent normal samples from 16 patients were obtained using mRNA microarrays. We screened out five overlapping Shc1-binding proteins (JUP, EPHA2, RASAL2, LYN, and SHCBP1) that were positively correlated with HER2 expression and were upregulated in gastric cancer (Fig. 1c, d). Finally, to confirm our screening results, the mRNA expression of the identified Shc1-binding proteins in HER2-positive gastric cancer patients was detected using real-time PCR (RT-PCR). We found that SHCBP1 and RASAL2 were indeed upregulated in HER2-positive gastric cancer (Fig. 1e). Of the two binding proteins, we focused on SHCBP1, as it was previously reported to interact with Shc1 prior to EGF stimulation and the role of SHCBP1 in HER2-mediated signal activation was elusive 21. To confirm the screening result that SHCBP1 binds Shc1, we transduced the cells expressing Flag-Shc1 with an HA-tagged SHCBP1 vector, and their interactions following EGF stimulation were detected using co-immunoprecipitation and immunoblotting. We found that SHCBP1 was associated with Shc1 prior to stimulation but was displaced following EGF treatment (Fig. 1f). In addition, using fluorescence resonance energy transfer (FRET) and immunofluorescent colocalization analysis, we showed that EGF induced Shc1 to dissociate from SHCBP1 and bind to HER2 (Fig. 1g, h and Supplementary Fig. 1c). To examine whether EGF-induced SHCBP1 and Shc1 dissociation is a cascade of HER2 activation, we pretreated cells with trastuzumab to block HER2 activation and observed that the displacement between Shc1 and SHCBP1 in response to EGF stimulation was abolished (Fig. 1i). These results were further validated by FRET analysis, demonstrating that the Shc1-SHCBP1 interaction decreased with exposure to EGF and that this effect was relieved by trastuzumab treatment (Fig. 1j). In conclusion, these results suggest that SHCBP1 is a downstream effector of HER2 and may contribute to the regulation of a novel oncogenic signaling axis in response to HER2 activation in gastric cancer.

SHCBP1 is upregulated in human gastric cancer and correlates with drug sensitivity in patients subjected to trastuzumab-based therapy. To verify whether SHCBP1 is upregulated in gastric cancer and has clinical relevance to HER2 amplification, we performed IHC analysis of tissue microarrays (TMAs) including 223 paired gastric cancerous and adjacent normal tissues; the results indicated that SHCBP1 was indeed highly expressed in gastric cancer samples (Fig. 2a, b). This was also confirmed by SHCBP1 immunoblotting of tissues from eight gastric cancer patients (Fig. 2c) and by SHCBP1 expression analysis of a publicly available GEO dataset (Fig. 2d). We also performed H&E, IHC, and immunofluorescence staining of SHCBP1 and HER2 in the TMAs, which revealed a positive correlation between their expression levels (Fig. 2e, f).
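For illustration, the sketch below is a minimal version of the Spearman screen described above, run on fabricated expression values; only the ρ ≥ 0.3 cutoff and the pooled sample size of 659 profiles come from the text.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 659  # number of pooled gastric cancer expression profiles
her2 = rng.normal(size=n)
# Fabricated candidate interactors: one correlated by construction, one not.
candidates = {
    "SHCBP1": 0.5 * her2 + rng.normal(size=n),
    "JUP": rng.normal(size=n),
}

# Keep Shc1 interactors whose Spearman rho with HER2 is >= 0.3.
for gene, values in candidates.items():
    rho, p = spearmanr(her2, values)
    verdict = "keep" if rho >= 0.3 else "drop"
    print(f"{gene}: rho = {rho:.2f}, p = {p:.3g} -> {verdict}")
```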
Interestingly, high expression of SHCBP1 (H-score ≥ 70) was associated with worse OS in HER2-positive patients than in HER2-negative patients (Fig. 2g and Supplementary Fig. 2a). In addition, we assessed the relationship between SHCBP1 and the clinicopathological characteristics of HER2-positive patients from the gastric cancer TMAs and found a significant correlation between SHCBP1 and tumor invasion, lymph node status, and tumor stage (Supplementary Fig. 2b). Furthermore, univariate and multivariate analyses of patients from the gastric cancer TMAs demonstrated that SHCBP1 expression was an independent prognostic factor for HER2-positive gastric cancer patients (Supplementary Table 1). To investigate whether upregulated SHCBP1 confers trastuzumab sensitivity, we examined SHCBP1 status by IHC in 22 HER2-positive gastric cancer patients who received trastuzumab-based therapy (Supplementary Table 2). For these cases, patients without evidence of disease for over 2 years after treatment with trastuzumab were considered "sensitive", and patients whose death was related to disease recurrence were deemed "resistant". Compared with the "sensitive" group, SHCBP1 was significantly higher in the "resistant" group (Fig. 2h, i). Furthermore, we evaluated the prognostic value of SHCBP1 for OS, showing that high SHCBP1 expression (H-score ≥ 70) correlates with a shorter OS time in cancer patients who received trastuzumab-based therapy (Fig. 2j). Collectively, these findings demonstrate that the initial SHCBP1 expression significantly correlates with trastuzumab sensitivity, unraveling the clinical importance of SHCBP1 in HER2-targeted therapy for gastric cancer.

SHCBP1 contributes to trastuzumab sensitivity in HER2-positive gastric cancer. Given these findings, we hypothesized that SHCBP1 is a critical driver of HER2-mediated cell proliferation and is associated with gastric cancer sensitivity to trastuzumab. To explore this, we knocked down SHCBP1 expression with two different shRNAs targeting SHCBP1 in SNU-216 and NCI-N87 cells (Supplementary Fig. 3a). SHCBP1 depletion significantly suppressed cell proliferation in both NCI-N87 and SNU-216 cells (Fig. 3a, b). We also examined the dose-dependent growth inhibition of shCtrl and SHCBP1-depleted cells in response to trastuzumab, which revealed that trastuzumab treatment caused a dose-dependent decrease in cell proliferation and that knockdown of SHCBP1 significantly sensitized both NCI-N87 and SNU-216 cells to trastuzumab (Fig. 3c, d). These results were further confirmed by a long-term colony formation assay (Fig. 3e). We also engineered cell lines to overexpress SHCBP1 and performed proliferation and colony formation assays, which indicated that SHCBP1 overexpression reduced the sensitivity to trastuzumab in both NCI-N87 and SNU-216 cells (Fig. 3f-h). For in vivo investigation, we established xenografts of NCI-N87 cells in nude mice and tested the efficacy of trastuzumab against tumor xenografts. As predicted by the in vitro studies, we found that trastuzumab treatment moderately slowed tumor growth with shCtrl cells but significantly reduced tumor growth in SHCBP1-depleted cell xenograft models (Fig. 3i and Supplementary Fig. 3c, d). Consistently, IHC staining analysis of Ki-67 showed that SHCBP1 knockdown significantly enhanced trastuzumab suppression of cellular proliferation (Fig. 3j), underscoring that SHCBP1 depletion renders HER2-positive gastric cancer sensitive to trastuzumab both in vitro and in vivo.
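The IC50 values reported throughout these dose-response experiments can be estimated by fitting a four-parameter logistic curve to viability data. The sketch below shows one conventional way to do this with scipy.optimize.curve_fit; the dose-viability points are fabricated and do not reproduce any figure in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([0.1, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])        # ug/mL
viability = np.array([0.99, 0.97, 0.90, 0.72, 0.48, 0.30, 0.22])  # of control

# Fit and report the midpoint of the curve (the IC50 estimate).
popt, _ = curve_fit(four_pl, dose, viability, p0=[0.2, 1.0, 30.0, 1.0])
bottom, top, ic50, hill = popt
print(f"estimated IC50 = {ic50:.1f} ug/mL (Hill slope {hill:.2f})")
```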
Trastuzumab combined with an SHCBP1 inhibitor may therefore be an effective therapeutic strategy for HER2-positive gastric cancer.

Nuclear localization of SHCBP1 contributes to HER2-mediated cell proliferation. To explore the precise role of SHCBP1 in trastuzumab sensitivity regulation, given that it departs from Shc1 following HER2 activation, we examined the subcellular localization of SHCBP1 throughout the cell division cycle in synchronized cells. Interestingly, SHCBP1 was dynamically localized to various subcellular structures during the successive steps of mitotic division. Anti-SHCBP1 antibodies produced weak, mostly cytoplasmic staining of interphase cells, but strong staining of cells in S phase and the mitotic phase. SHCBP1 was diffusely localized to the nuclear region during S phase and to the spindle in prometaphase and metaphase. Progressively, SHCBP1 accumulated at the central spindle during late anaphase and finally at the midbody during cytokinesis (Fig. 4a). These results suggest that SHCBP1 may translocate into the nucleus and then act as an essential mitotic component to regulate cell division. Subsequently, we assessed whether SHCBP1 nuclear localization is a signaling cascade following HER2 activation. IF staining and nucleoprotein immunoblotting analysis demonstrated that EGF induced dramatic nuclear localization of SHCBP1 in both NCI-N87 and SNU-216 cells compared with EGF-free cells (Fig. 4b-d and Supplementary Fig. 4a, b).

Fig. 1 Identification of SHCBP1 as a downstream effector of HER2. a Shc1-binding proteins in SNU-216 cells identified using liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis. b Gene expression correlation of Shc1-binding proteins and HER2 in 659 gastric cancer tissues from GEO data (GSE62254, GSE15459, GSE34942, and GSE54129). A representative scatter plot of SHCBP1 and HER2 is shown. The p values were determined by two-sided Spearman's rank correlation test (n = 659 independent biological samples). c Gene expression profiles of the cancer and corresponding adjacent normal tissues from 16 patients detected using mRNA microarrays. The upregulated Shc1-binding proteins are marked in red, and the representative raw count of SHCBP1 is shown. d A Venn diagram showing the overlap of Shc1-binding proteins, cancer-upregulated proteins, and proteins correlated with HER2 expression. e mRNA expression of identified Shc1-binding proteins (SHCBP1, JUP, LYN, EPHA2, and RASAL2) in seven patients with HER2-positive gastric cancer detected by real-time PCR. Data are the mean ± standard error of the mean (s.e.m). The p values were determined by paired two-sided Student's t test or nonparametric test (n = 7 independent biological samples). f Co-immunoprecipitation assays of Flag-Shc1 together with HA-SHCBP1 in SNU-216 cells treated with 100 ng/mL epidermal growth factor (EGF) for the indicated times. IP immunoprecipitation, WCL whole cell lysates. g Fluorescence resonance energy transfer (FRET) assay of eCFP-Shc1 and eYFP-SHCBP1 in cells treated with 100 ng/mL EGF for the indicated times. Data are the mean ± s.e.m. The p values were determined by repeated measures one-way ANOVA (n = 105 independent cells per group). h Immunofluorescence colocalization of Shc1 and HER2 in SNU-216 cells treated with/without 100 ng/mL EGF. Cells were immunostained with anti-ERBB2 antibody (red), anti-Shc1 antibody (green), and DAPI (blue). i Co-immunoprecipitation assays of Flag-Shc1 together with HA-SHCBP1 treated with EGF and/or trastuzumab (Trast). j FRET assays of eCFP-Shc1 and eYFP-SHCBP1 in cells treated with EGF and/or trastuzumab. Data are the mean ± s.e.m. The p values were determined by two-sided nonparametric test (n = 130 independent cells per group). Data of f and i are representative of at least two independent experiments. Data of h are representative of at least three independent experiments.

Moreover, blocking HER2 activation using trastuzumab treatment effectively abolished EGF-induced nuclear translocation (Fig. 4e, f and Supplementary Fig. 4c). These data suggest that SHCBP1 nuclear localization is a downstream consequence of HER2 activation. To characterize the potential mechanism underlying SHCBP1 nuclear localization, we engineered SNU-216 cells stably expressing Flag-tagged SHCBP1 deletion mutants to identify the core fragments responsible for SHCBP1 nuclear translocation. IF and immunoblotting analyses showed that the nuclear localization of the 291-562 aa fragment was significantly weaker than that of the 64-562 aa fragment, indicating that the fragment containing amino acids 64-291 was required for nuclear localization (Fig. 4g). To identify the modification that serves SHCBP1 nuclear translocation, we purified Flag-tagged SHCBP1 using co-immunoprecipitation following EGF treatment and analyzed the site modification by LC-MS/MS. A prominent phosphorylation site on serine 273 of SHCBP1 was identified (Fig. 4h) and further validated by mutagenesis analysis. We constructed a mutant SHCBP1 (SHCBP1 S273A), and SHCBP1 phosphorylation was detected using immunoblotting with an anti-phosphoserine (pSer) antibody. Mutagenesis analysis confirmed that EGF induced serine phosphorylation of SHCBP1 WT, but to a lesser extent of the SHCBP1 S273A mutant (Fig. 4i). Similarly, IF and immunoblotting analyses indicated that the SHCBP1 S273A mutant effectively blocked SHCBP1 nuclear translocation but had no effect on EGF-induced SHCBP1-Shc1 dissociation (Fig. 4j and Supplementary Fig. 4d). Moreover, we also mutated Ser273 of SHCBP1 to Asp273, which confirmed that the SHCBP1 S273D mutant significantly inhibited SHCBP1 nuclear translocation (Supplementary Fig. 4d, e). These results suggested that phosphorylation at the S273 site is indispensable for SHCBP1 nuclear localization. Then, we validated the necessity of SHCBP1 nuclear localization for HER2-mediated cell proliferation. We reexpressed SHCBP1 WT and SHCBP1 S273A in SHCBP1 knockdown cells and determined the trastuzumab half-maximal inhibitory concentration (IC50) in these cells. Reexpression of SHCBP1 WT reversed the sensitization effect of SHCBP1 knockdown on trastuzumab (IC50 = 84.04 μg/mL), while SHCBP1 S273A reexpression did not (IC50 = 22.47 μg/mL, Fig. 4k). Collectively, these data support our hypothesis that SHCBP1 nuclear localization is a downstream consequence of HER2 activation and contributes to HER2-mediated cell proliferation.

HER2 mediates cell mitotic progression by activating the SHCBP1-PLK1-MISP signaling pathway. After SHCBP1 nuclear translocation, we speculated that SHCBP1 may be a mitotic protein involved in cell mitotic progression. We colocalized SHCBP1 with the centrosome and spindle; the results showed that SHCBP1 localized to the spindle poles in metaphase and to the midbody in cytokinesis (Supplementary Fig. 5a). These results suggest that SHCBP1 is a mitotic regulator and plays pleiotropic roles in both metaphase and cytokinesis.
Previous studies demonstrated that SHCBP1 regulates the completion of cytokinesis through its interaction with the centralspindlin complex composed of MKLP1 and MgcRacGAP 22,23, but the role of SHCBP1 in metaphase is still unknown. To further investigate how SHCBP1 contributes to cell division in metaphase, we knocked down SHCBP1 expression in SNU-216 cells and examined the changes in cell mitotic progression. A cell cycle assay using flow cytometry showed that SHCBP1 depletion induced a significant G2/M arrest (Supplementary Fig. 5b, c). We monitored the division of SHCBP1 knockdown cells by time-lapse microscopy and observed an obvious delay in mitotic progression (Supplementary Fig. 5d). Furthermore, close inspection of the mitotic spindle stained with anti-α-tubulin and the centrosomal marker γ-tubulin showed a marked increase in defective and multipolar spindles in SHCBP1 knockdown cells (Supplementary Fig. 5e, f). These results suggested that SHCBP1 is essential for proper spindle formation during metaphase of mitotic progression. Then, to investigate the detailed mechanisms of how SHCBP1 contributes to cell division in metaphase, we synchronized cells stably expressing Flag-tagged SHCBP1 to the mitotic phase using nocodazole (NOC) and identified SHCBP1-interacting proteins using immunoprecipitation and LC-MS/MS analysis (Supplementary Fig. 6a). We readily detected two mitotic proteins, PLK1 (polo-like kinase 1) and MISP (mitotic interactor and substrate of PLK1), as prominent SHCBP1-interacting proteins, and the interactions were confirmed by co-immunoprecipitation performed on NOC-arrested SNU-216 cells (Supplementary Fig. 6b-d). Interestingly, MISP has been reported to be a PLK1 substrate required for proper spindle orientation and mitotic progression 24. These findings directed our focus to the formation of the SHCBP1-PLK1-MISP complex. We co-expressed Flag- or HA-tagged SHCBP1, PLK1, and/or MISP in HEK293T cells, and co-immunoprecipitation assays showed that SHCBP1-PLK1-MISP complex formation could readily be observed in M phase (Fig. 5a). In a yeast two-hybrid assay, SHCBP1 interacted directly with PLK1 but not MISP, while PLK1 interacted with both SHCBP1 and MISP (Fig. 5b). The interactions were also supported by a GST pulldown assay, demonstrating that PLK1 is an intermediate protein connecting SHCBP1 and MISP in the protein complex (Fig. 5c). Subcellular colocalization of the SHCBP1-PLK1-MISP complex revealed that the complex was specifically enriched at spindle poles in metaphase (Fig. 5d). Both SHCBP1 and PLK1 accumulated at the midbody in the cytokinesis phase, but PLK1 was located at the ends while SHCBP1 was located at the center, suggesting that the SHCBP1-PLK1-MISP complex contributes to cell division in metaphase rather than in the cytokinesis phase. Based on the above findings, we reasoned that SHCBP1 may interact with PLK1 to promote PLK1 kinase activity and enhance MISP phosphorylation for cell division. To confirm that MISP is a substrate of PLK1, we synchronized cells to metaphase using NOC and observed the presence of a slower migrating MISP band in western blots, which was attributed to its phosphorylation, as the band disappeared in response to calf intestinal phosphatase treatment.

Fig. 2 a SHCBP1 immunohistochemical (IHC) staining in gastric cancer and corresponding adjacent normal tissues. b Quantification of SHCBP1 IHC analysis of human gastric tissue microarrays (TMAs) from 223 patients; H-score, histoscore. The p values were determined by two-sided nonparametric test (n = 223 independent biological samples). c SHCBP1 immunoblotting analysis of gastric cancer (GC) and corresponding adjacent normal (AN) samples from eight patients. d Quantification of SHCBP1 expression in gastric cancer and normal specimens obtained from the GEO database (GSE66229 and GSE54129). The p values were determined by two-sided nonparametric test (n = 411 in cancer group and 121 in normal group, independent biological samples). The box plots denote medians (center lines), 25th and 75th percentiles (bounds of boxes), and minimum and maximum (whiskers). e Representative H&E, IHC, and immunofluorescence staining of SHCBP1 and HER2 in gastric cancer TMAs. f Scatter plots of HER2 versus SHCBP1 H-score in the human gastric TMAs. The p values were determined by two-sided Spearman's rank correlation test (n = 223 independent biological samples). g Kaplan-Meier plot of the correlation between SHCBP1 expression and patient overall survival using the human gastric cancer TMAs. H-score < 70 defines the SHCBP1 low expression group and H-score ≥ 70 the SHCBP1 high expression group. HR, hazard ratio. The p values were determined by log-rank test (n = 223 independent biological samples). h, i SHCBP1 IHC staining and the quantification results in gastric cancer from patients subjected to trastuzumab-based therapy. Data are the mean ± s.e.m. The p values were determined by two-sided nonparametric test (n = 7 in sensitive group and 6 in resistant group, independent biological samples). j Kaplan-Meier plot of the correlation between SHCBP1 expression and overall survival of patients subjected to trastuzumab-based therapy. H-score < 70 defines the SHCBP1 low expression group and H-score ≥ 70 the SHCBP1 high expression group. The p value was determined by log-rank test (n = 22 independent biological samples). Data of a, e, and h are representative of at least three independent experiments. Data of c are representative of two independent experiments.

Overexpressed PLK1 led to a pronounced increase in the slower migrating MISP band, suggesting that PLK1 catalyzed MISP phosphorylation (Fig. 5e). Moreover, in vitro kinase assays with recombinant MISP and PLK1 were performed, and strong phosphorylation of MISP by PLK1 was detected (Fig. 5f). To better understand the role of SHCBP1 in the process of PLK1 phosphorylating MISP, we examined the interaction of MISP and PLK1 in SHCBP1 knockdown cells, which indicated that SHCBP1 depletion reduced the binding of PLK1 and MISP (Fig. 5g). Consistently, PLK1-induced MISP phosphorylation was also diminished in SHCBP1 knockdown cells, but reexpression of SHCBP1 reversed this inhibitory effect (Fig. 5h, i), suggesting that the activity of PLK1 toward MISP is greatly facilitated by SHCBP1. We further investigated whether SHCBP1-PLK1-facilitated MISP phosphorylation is a downstream consequence of HER2 activation and found that EGF exposure gradually induced MISP phosphorylation and that this was blocked by trastuzumab treatment, suggesting that HER2 is indispensable for MISP phosphorylation (Supplementary Fig. 6e, f). To determine whether MISP phosphorylation is essential for HER2-mediated cell proliferation and trastuzumab sensitivity, we sought to identify potential phosphorylation sites of PLK1 on MISP using MS analysis. Ten phosphorylation sites of MISP were identified (Supplementary Fig. 5g), and previous studies have shown that three of them are phosphorylated by PLK1 and four by cyclin-dependent kinase 1 (CDK1) 24.
Mutation of the three PLK1-phosphorylated residues, S394, S395, and S397, to nonphosphorylatable alanine (MISP-3A) significantly reduced MISP phosphorylation (Fig. 5j). Then, MISP was knocked down with two different shRNAs (Supplementary Fig. 3b), and MISP-WT or MISP-3A was reexpressed in the knockdown cells. A colony formation assay showed that MISP depletion significantly inhibited cell proliferation, and this was reversed by reexpression of MISP-WT but not MISP-3A (Supplementary Fig. 6h). Also, determination of the trastuzumab IC50 showed that MISP knockdown significantly sensitized cells to trastuzumab (Fig. 5k). Reexpression of MISP WT in MISP-depleted cells reversed the sensitization effect (IC50 = 106.84 μg/mL), while MISP 3A reexpression did not (IC50 = 40.18 μg/mL, Fig. 5l). Taken together with the above data, we conclude that nuclear SHCBP1 contributes to cell mitosis through binding with PLK1 to promote the phosphorylation of the mitotic interactor MISP, which is a downstream consequence of HER2 activation. HER2 can mediate cell proliferation by activating a Shc1-SHCBP1-PLK1-MISP signaling pathway, which is responsible for trastuzumab sensitivity.

The mode of the SHCBP1-PLK1 interaction. Having found that HER2 is involved in mitotic promotion by activating the Shc1-SHCBP1-PLK1-MISP pathway to regulate trastuzumab sensitivity, we posited that pharmacologic inhibitors targeting the SHCBP1-PLK1 interaction may sensitize cells to trastuzumab. To develop a functional inhibitor blocking SHCBP1-PLK1 binding, we sought to identify the SHCBP1 peptide fragment capable of binding the PLK1 protein. Domain truncation experiments and co-immunoprecipitation revealed that SHCBP1 residues 355-562 aa were required for binding to PLK1 (Fig. 6a). Correspondingly, the domain of PLK1 that interacts with SHCBP1 was also determined, and the polo-box domain (PBD), rather than the kinase domain, was found to bind SHCBP1 (Fig. 6b). This binding was further confirmed by a yeast two-hybrid assay (Supplementary Fig. 7a). After that, we employed the computational protein-protein docking algorithm ZDOCK 25 to predict the bound conformation of the PLK1 PBD domain (PDB ID code 1UMW) 26 and the SHCBP1 355-562 aa domain, whose structure was predicted by homology modeling using the I-TASSER server 27 (Supplementary Fig. 7b). Following ZDOCK analysis, the top-ranked bound conformation for the SHCBP1-PLK1 complex was selected, and 12 core amino acids on the binding surface of the model were predicted (Fig. 6c, d and Supplementary Fig. 7c, d). To validate the PLK1-SHCBP1-binding mode, we designed protein deletion mutants and amino acid mutations of PLK1 covering the 12 core amino acids inside the contact surface. A co-immunoprecipitation assay between SHCBP1 and the PLK1 mutants showed that mutations of four core amino acids (K474S, Y485F, H489N, and L490A) significantly blocked the binding of PLK1 and SHCBP1 (Fig. 6e, f). These results confirmed the mode of the SHCBP1-PLK1 complex, in which K474, Y485, H489, and L490 are core amino acids inside the contact surface.

Natural product TFBG is an inhibitor of the SHCBP1-PLK1 interaction. Given our findings on the SHCBP1-PLK1 interaction mode, we sought to identify small molecules targeting the PLK1-SHCBP1 interaction. By analyzing the SHCBP1-PLK1-binding mode with MOE-Site Finder software, an inhibitor binding pocket of PLK1 (pocket 5) that covered the four core amino acids of the SHCBP1-PLK1 complex was found on the binding surface of the complex (Supplementary Fig. 7e).
Then, using virtual screening (VS) of diverse chemical libraries consisting of 17,676 small molecules, we initially identified 40 compounds that bind to pocket 5 of PLK1 (Supplementary Figs. 7f and 8a), and the compounds were further screened by surface plasmon resonance (SPR) technology (Supplementary Fig. 8b, c). Finally, the natural product TFBG, which selectively and efficiently targeted PLK1, was identified according to its minimum Kd value (Fig. 6g). SPR kinetic analyses showed that the Kd value of TFBG against PLK1 was 4.67 × 10−7 M (Fig. 6h), and the binding affinity was confirmed by a microscale thermophoresis (MST) assay (Supplementary Fig. 8d). To assess the inhibitory effect of TFBG on the SHCBP1-PLK1 interaction, co-immunoprecipitation between SHCBP1 and PLK1 following TFBG exposure was performed and indicated that the SHCBP1-PLK1 interaction decreased with increasing concentrations of TFBG (Supplementary Fig. 8e). FRET assays were conducted and showed that TFBG inhibits the SHCBP1-PLK1 interaction with increasing concentration and exposure time (Fig. 6i, j). The inhibitory effect of TFBG on the SHCBP1-PLK1 interaction was also supported by a GST pulldown assay, demonstrating that TFBG inhibited the binding of SHCBP1 to PLK1 in vitro (Supplementary Fig. 8f).

Fig. 3 a, b Growth curves of NCI-N87 and SNU-216 shCtrl cells or cells with SHCBP1 knockdown. Data are the mean ± s.e.m. The p values were determined by two-sided nonparametric test and one-way ANOVA (n = 6 independent biological samples). c, d Sensitivity to trastuzumab (Trast) in NCI-N87 and SNU-216 Ctrl cells or SHCBP1 knockdown cells. Cells were treated with trastuzumab at the indicated concentrations for 6 days and subjected to a cell viability assay. Data are the mean ± s.e.m. The p values were determined by one-way ANOVA (n = 6 independent biological samples). e Colony formation and the statistical results of shCtrl cells and SHCBP1 knockdown cells in the continuous presence or absence of trastuzumab. Data are the mean ± s.e.m. The p values were determined by one-way ANOVA (n = 3 independent biological samples). f, g Sensitivity to trastuzumab in wild-type (WT) or SHCBP1-overexpressing (SHCBP1 +/+) NCI-N87 and SNU-216 cells. Cells were treated with trastuzumab at the indicated concentrations for 6 days and subjected to a cell viability assay. Data are the mean ± s.e.m. The p values were determined by two-sided Student's t test (n = 6 independent biological samples). h Colony formation and the statistical results of WT cells and SHCBP1-overexpressing cells in the continuous presence or absence of trastuzumab. Data are the mean ± s.e.m. The p values were determined by one-way ANOVA (n = 3 independent biological samples). i Representative images (left) and tumor growth curves (right) of trastuzumab-treated mice carrying shCtrl or SHCBP1 knockdown NCI-N87 cell xenografts. Mice were administered trastuzumab (intraperitoneal, 10 mg/kg, 2×/wk × 3) when tumors reached 100 mm3 in size. Data are the mean ± s.e.m. The p values were determined by two-sided nonparametric test (n = 8 independent mice per group). j Representative images (left) of Ki-67 immunohistochemical analysis and quantification (right) of Ki-67-positive cells in tumors from (i). Data are the mean ± s.e.m. The p values were determined by one-way ANOVA (n = 3-6 independent biological samples).
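To put the SPR affinity reported above (Kd = 4.67 × 10−7 M) in perspective, a simple 1:1 (Langmuir) occupancy model predicts the fraction of PLK1 bound as a function of TFBG concentration; at [TFBG] = Kd, occupancy is 50%. This back-of-the-envelope sketch is our own illustration and assumes simple one-site binding, which the SPR kinetics do not by themselves guarantee.

```python
# Fractional PLK1 occupancy under a 1:1 binding model: f = [L] / (Kd + [L]).
KD = 4.67e-7  # M, the reported Kd of TFBG for PLK1

for conc_um in (0.1, 0.467, 1.0, 5.0, 10.0, 20.0):
    conc_m = conc_um * 1e-6
    occupancy = conc_m / (KD + conc_m)
    print(f"[TFBG] = {conc_um:6.3f} uM -> PLK1 occupancy ~ {occupancy:.0%}")
```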
To determine the inhibitory effect of TFBG on cell proliferation, we assessed the IC50 of the inhibitor, which indicated significant proliferation inhibition in both SNU-216 cells (IC50 = 23.42 μM, Fig. 6k) and NCI-N87 cells (IC50 = 26.03 μM, Supplementary Fig. 8g). Taken together, these data demonstrate that TFBG is a selective inhibitor targeting the PLK1-SHCBP1 interaction and has the potential for use as an anticancer drug. The addition of 10 μM TFBG resulted in a decrease in the trastuzumab IC50 from 142.04 to 32.34 μg/mL, and the addition of 20 μM TFBG reduced the IC50 to 7.29 μg/mL in SNU-216 cells (Fig. 7a, b). Using the Chou and Talalay method, we confirmed that TFBG and trastuzumab in combination exerted a synergistic response in cells, with a combination index (CI) < 1 (Supplementary Fig. 9a). A similar combinatorial effect was confirmed by colony formation assays of SNU-216 and NCI-N87 cells (Fig. 7c, d). For in vivo experiments, we established xenografts of NCI-N87 cells in nude mice and began treatment once tumors reached 100 mm3 in size with trastuzumab (intraperitoneal, 10 mg/kg, 2×/wk × 3), TFBG (intraperitoneal, 50 mg/kg, 1×/day × 21), or a combination of the two agents. We found that both trastuzumab and TFBG monotherapies moderately slowed tumor growth and did not induce regression. However, combination therapy delayed tumor growth significantly more than either single agent and induced dramatic tumor regression (Fig. 7e and Supplementary Fig. 9b-d). Hematoxylin-eosin and immunohistochemical analysis of Ki-67-positive cells in mouse xenograft tumors confirmed that combination therapy reduced the rate of cell proliferation more significantly than either single agent (Fig. 7f). We also treated NCI-N87 tumors using TFBG subcutaneously at low doses (2.5 mg/kg, 1×/day × 21), confirming that the trastuzumab-TFBG combination suppressed tumor growth efficiently and induced dramatic tumor regression (Supplementary Fig. 9e-g). Finally, we validated whether SHCBP1-PLK1-MISP signaling was blocked in tumors treated with the trastuzumab-TFBG combination. SHCBP1 IF staining of tumor tissues demonstrated that the TFBG and trastuzumab combination effectively blocked SHCBP1 nuclear translocation (Fig. 7g, h). MISP immunoblotting showed that both trastuzumab and TFBG monotherapies inhibited MISP phosphorylation and that combination therapy significantly enhanced this inhibition (Fig. 7i). To further investigate whether TFBG blockade of the SHCBP1-PLK1 interaction has feedback effects on the upstream Shc1-SHCBP1 binding, we examined the Shc1-SHCBP1 interaction after 5 and 10 μM TFBG treatment. We found that TFBG partly suppressed EGF-induced Shc1 and SHCBP1 dissociation, which suggested that this feedback inhibition of Shc1-SHCBP1 by TFBG is one of the reasons why TFBG sensitizes gastric cancer to trastuzumab (Supplementary Fig. 9h). Together, these findings suggest that the combination of TFBG and trastuzumab is efficient in the treatment of HER2-positive gastric cancer by blocking HER2-SHCBP1-PLK1-MISP signaling. TFBG is a promising treatment strategy for sensitizing gastric cancer cells and improving trastuzumab efficacy against HER2-positive gastric cancer.

Discussion

Trastuzumab is one of the approved anti-HER2 therapies used in combination with chemotherapy for patients with advanced gastric cancer, but its antitumor efficacy is limited by insufficient cell sensitivity and drug resistance 28,29.
The mechanisms underlying drug resistance are usually proposed to be amplification, upregulation, or mutation of HER2 and the downstream MAPK and PI3K/AKT pathways 7,8. Here, we describe a new mechanism by which HER2 promotes tumorigenesis: direct regulation of mitotic progression through hyperactivation of a Shc1-SHCBP1-PLK1-MISP axis, which drives the sensitivity of HER2-positive cells to trastuzumab. The inhibitor TFBG can selectively block the SHCBP1-PLK1 complex and render gastric cancer sensitive to trastuzumab (Fig. 8). Our findings facilitate a deeper understanding of HER2-evoked intracellular signaling networks and offer a promising therapeutic strategy for HER2-positive gastric cancer. HER2 employs a well-known scaffold protein, Shc1, to amplify downstream signaling through multiple waves of protein interactions 21,30. We screened out a Shc1-binding protein, SHCBP1, which was upregulated and associated with worse OS and adverse clinicopathological characteristics in HER2-positive patients. Previous studies have also demonstrated an oncogenic role of SHCBP1 in non-small cell lung carcinoma (NSCLC) and breast cancer, in which SHCBP1 was upregulated and correlated with poorer patient survival 31,32. Moreover, SHCBP1 was found to be strongly associated with HER2 expression in breast cancer 32. Here, we demonstrated a positive correlation between HER2 and SHCBP1 expression in gastric cancer, suggesting that SHCBP1 may be involved in HER2-promoted tumorigenesis. We knocked down SHCBP1 expression and found that SHCBP1 depletion inhibited cell proliferation and sensitized the gastric cancer cell lines SNU-216 and NCI-N87 to trastuzumab. Consistently, others have reported that SHCBP1 knockdown inhibited the proliferation and metastasis of the gastric cancer cell lines MGC-803 and SGC-7901 and suppressed the proliferation and motility of esophageal squamous cell carcinoma (ESCC) cells 33,34. These findings demonstrate the oncogenic role of SHCBP1 in a variety of human cancers. Previously, SHCBP1 was identified as playing a role in cisplatin-induced apoptosis resistance in NSCLC, indicating a regulatory function of SHCBP1 in drug resistance in cancers 35.

Fig. 4 Nuclear localization of SHCBP1 contributes to HER2-mediated cell proliferation. a Immunofluorescence (IF) staining of SHCBP1 throughout the cell division cycle in synchronized SNU-216 cells. Cells were co-stained with anti-SHCBP1 antibody (red), anti-α-tubulin antibody (green), and DAPI (blue). b IF staining of SHCBP1 nuclear localization in SNU-216 cells treated with 100 ng/mL epidermal growth factor (EGF) at the indicated times. Cells were co-stained with anti-SHCBP1 antibody (red), anti-ERBB2 antibody (green), and DAPI (blue). c Nuclear SHCBP1 positivity of SNU-216 cells following EGF treatment for different times. Eight fields containing at least 50 nuclei were counted in each treatment group. Data are the mean ± s.e.m. The p values were determined by one-way ANOVA (n = 8 independent biological samples). d Immunoblotting analysis of SHCBP1 in nuclear (Nuc) and cytosolic (Cyto) extracts of EGF-treated SNU-216 cells. e, f Immunoblotting analysis (e) and IF statistical results (f) of nuclear SHCBP1 positivity in SNU-216 cells treated with EGF and/or trastuzumab (Trast). Data are the mean ± s.e.m. The p values were determined by one-way ANOVA (n = 8 independent biological samples).
g Nuclear localization detection of Flag-tagged SHCBP1 deletion mutants stably expressed in SNU-216 cells using IF and immunoblotting. Cells were immunostained with anti-Flag antibody (green) for the SHCBP1 deletion mutants and DAPI (blue) for nuclei. WT wild type. h Identification of serine residue phosphorylation in SHCBP1. Flag-SHCBP1 was immunopurified and analyzed by SDS-PAGE and LC-MS/MS. i Lysates from SNU-216 cells stably expressing Flag-tagged SHCBP1 (WT) or mutant SHCBP1 (S273A) were immunopurified and analyzed by immunoblotting using an anti-phosphoserine (p-Ser) antibody. IP immunoprecipitation, WCL whole cell lysates. j Nuclear localization detection of Flag-tagged SHCBP1 or mutant SHCBP1 (S273A) using IF and immunoblotting. Cells were immunostained with anti-Flag antibody (green) and DAPI (blue). k Sensitivity to trastuzumab in SHCBP1 knockdown cells reexpressing SHCBP1 or mutant SHCBP1 (S273A). Cells were treated with trastuzumab at the indicated concentrations for 6 days and subjected to a cell viability assay. Data are the mean ± s.e.m. The p values were determined by two-sided Student's t test (n = 5 independent biological samples). Data of a and b are representative of at least three independent experiments. Data of d, e, g, i, and j are representative of at least two independent experiments.

Notably, we showed that SHCBP1 functions as a regulator of HER2-directed gastric cancer therapy. SHCBP1 expression correlated with trastuzumab sensitivity in patients who received trastuzumab-based therapy, and SHCBP1 depletion rendered gastric cancer sensitive to trastuzumab, implying that SHCBP1 contributes to trastuzumab sensitivity and can be targeted therapeutically. However, a more extensive clinical analysis of the correlation between SHCBP1 and outcome in HER2-positive patients receiving trastuzumab therapy could not be conducted here owing to the limited number of such patients; a large-scale clinical study will be required to validate the importance of SHCBP1 in trastuzumab-based therapy. Previously, SHCBP1 was reported to be an Shc1-binding protein 21,36. We showed that SHCBP1 was initially associated with Shc1 but was displaced following EGF stimulation, and that this displacement was a downstream consequence of HER2 activation. Although the detailed mechanism by which HER2 activation disrupts SHCBP1-Shc1 binding has not been uncovered here, it has been reported that SHCBP1 interacts with the SH2 domain of Shc1, which is also recruited to phosphotyrosine motifs on ERBBs 36,37. Competitive binding of SHCBP1 and ERBBs to the Shc1 SH2 domain is a reasonable explanation for HER2-induced SHCBP1-Shc1 dissociation, but a systematic study is required. It has also been reported that silencing of SHCBP1 led to an increase of PTEN in lung cancer 38; it is therefore possible that SHCBP1 synergistically activates PI3K/AKT signaling by inhibiting PTEN. Revealing the crosstalk between HER2/SHCBP1/PLK1 and PI3K/AKT or MAPK signaling will be valuable for understanding SHCBP1 as a HER2 downstream effector. Following the disconnection of SHCBP1 from Shc1, we demonstrated that SHCBP1 translocated into the nucleus to promote mitotic progression. Dramatic nuclear localization of SHCBP1 after EGF exposure was observed and was abrogated by trastuzumab treatment. We also revealed that blocking SHCBP1 nuclear localization rendered gastric cancer cells sensitive to trastuzumab, demonstrating that SHCBP1 nuclear localization is a downstream consequence of HER2 activation.
However, it is unknown how SHCBP1 translocates into the nucleus following dissociation from Shc1. We characterized EGF-induced S273 phosphorylation of SHCBP1 as being responsible for these effects. Interestingly, previous observations also indicated that EGF stimulation can cause SHCBP1 redistribution to the nucleus: SHCBP1 interacts with β-catenin for nuclear translocation to activate β-catenin signaling 31. Consistent with this possibility, we speculate that phosphorylation at S273 causes the release of SHCBP1 from Shc1 and a subsequent interaction with β-catenin, resulting in the nuclear translocation of SHCBP1; however, a detailed interrogation is needed to confirm this explanation. Mitotic regulators typically translocate into the nucleus for cell division before nuclear envelope breakdown 39, and we found that SHCBP1 is an essential mitotic regulator involved in mitotic progression following its nuclear localization. SHCBP1 binds PLK1 and forms a complex with MISP to regulate cell mitosis through colocalization at the spindle poles. PLK1 is an essential polo-like kinase that performs pleiotropic functions to control mitotic entry, centrosome separation, and cytokinesis 40,41. MISP is a substrate of PLK1 and is phosphorylated by PLK1 to stabilize the cortical and astral microtubule attachments required for proper mitotic spindle positioning 24,42. We showed that nuclear SHCBP1 contributed to cell mitosis by binding PLK1 to promote the phosphorylation of MISP, a downstream consequence of HER2 activation that is involved in HER2-mediated cell proliferation and trastuzumab sensitivity. We found that SHCBP1 localized to the spindle poles in metaphase and to the midbody during cytokinesis, suggesting that SHCBP1 plays pleiotropic roles in both metaphase and cytokinesis. Indeed, SHCBP1 has been reported to be responsible for cell mitosis during cytokinesis, colocalizing at the midbody with MgcRacGAP and MKLP1, where it forms part of the centralspindlin complex and promotes ingression of the cytokinetic furrow 22,23. Our findings provide a deeper understanding of SHCBP1 as a mitotic protein employed in a plethora of mitotic processes. From a clinical perspective, our findings support SHCBP1-PLK1-MISP signaling as a HER2 downstream pathway that renders gastric cancer sensitive to trastuzumab. To this end, we identified the natural product TFBG as an inhibitor that blocks the SHCBP1-PLK1 interaction and sensitizes cells to trastuzumab. TFBG is a polyphenolic compound extracted from black tea that has exhibited potent anticancer properties 43. We showed that trastuzumab-TFBG combination therapy significantly delayed tumor growth and induced dramatic tumor regression by blocking activation of SHCBP1-PLK1-MISP signaling, demonstrating that TFBG is a promising sensitizing agent for combination with trastuzumab against HER2-positive gastric cancer. However, validation of tumor inhibition in patient-derived xenograft mouse models, together with systematic toxicological testing of the trastuzumab-TFBG combination, will be necessary before TFBG can be developed as a drug. It should be noted that TFBG was screened on the basis of the pocket of PLK1 that interacts with SHCBP1, so as to block the SHCBP1-PLK1 interaction; it is therefore possible that TFBG also inhibits the binding of PLK1 to other proteins. Previous studies have revealed that TFBG also targets other signaling pathways, including EGFR 44, which may partly account for the sensitizing effect of TFBG on trastuzumab in HER2-positive gastric cancer.
In addition, some studies have underlined the relatively low bioavailability and metabolic stability of TFBG 45. TFBG is extensively metabolized to metabolites including theaflavin, theaflavin-3-gallate, and gallic acid 46. Thus, medicinal chemistry efforts aimed at improving the compound's pharmacological properties to the level required for a drug will be important.

Fig. 5 SHCBP1 binds with PLK1 to promote MISP phosphorylation for mitotic progression. a Co-immunoprecipitation assays of Flag/HA-tagged SHCBP1, PLK1, or MISP co-expressed in HEK293T cells, which were synchronized to the mitotic phase using 40 ng/mL nocodazole (NOC). IP immunoprecipitation, WCL whole cell lysates. b Yeast two-hybrid interaction assays of the SHCBP1, PLK1, and MISP complex. SD-LW synthetic dropout Leu and Trp, SD-LWHA synthetic dropout Leu, Trp, His, and adenine. c Pulldown assays of the SHCBP1, PLK1, and MISP complex. 6His-GST-tagged SHCBP1 and HA-tagged PLK1 or MISP expressed in E. coli were pulled down and analyzed by Coomassie blue staining and immunoblotting. WCL whole cell lysates, PD pulldown. d Colocalization of SHCBP1, PLK1, and MISP in SNU-216 cells. Cells stably expressing GFP-tagged SHCBP1 (green) were immunostained with anti-PLK1 antibody (red), anti-MISP antibody (gray), and DAPI (blue). e Immunoblotting assays of MISP mobility shift in nocodazole (NOC)-blocked SNU-216 cells with or without calf intestinal phosphatase (CIP) treatment and in PLK1-overexpressing SNU-216 cells. f In vitro kinase assays of PLK1 on MISP detected by immunoblotting (left) and Coomassie blue staining (right). g Co-immunoprecipitation assays of Flag-MISP and HA-PLK1 in SNU-216 shCtrl or SHCBP1 knockdown cells. h Detection of overexpressed PLK1-induced MISP mobility shift in SNU-216 shCtrl or SHCBP1 knockdown cells using immunoblotting. i Detection of MISP mobility shift in SHCBP1 knockdown cells reexpressing SHCBP1 at the indicated concentrations, using immunoblotting. j Detection of MISP mobility shift in cells transfected with Flag-tagged MISP (WT) or mutant MISP (3A: S394A, S395A, and S397A) by immunoblotting using anti-Flag antibody. k Sensitivity to trastuzumab in SNU-216 shCtrl cells or MISP knockdown cells. Each group was treated with trastuzumab for 6 days and subjected to a cell viability assay. Data are the mean ± s.e.m. The p values were determined by one-way ANOVA (n = 6 independent biological samples). l Sensitivity to trastuzumab in MISP knockdown SNU-216 cells reexpressing MISP (WT) or mutant MISP (3A: S394A, S395A, and S397A). Cells were treated with trastuzumab for 6 days and subjected to a cell viability assay. WT wild type. Data are the mean ± s.e.m. The p values were determined by two-sided Student's t test (n = 6 independent biological samples). Data of a, c, e-j are representative of two independent experiments. Data of b and d are representative of three independent experiments.

In conclusion, we propose that HER2 participates in mitotic regulation by hyperactivating intracellular SHCBP1-PLK1-MISP signaling to mediate cell proliferation and drug sensitivity. HER2 activation leads to the recruitment of Shc1 to activate the MAPK and PI3K/AKT pathways and in turn releases SHCBP1, which translocates to the nucleus to drive mitotic progression. The SHCBP1-PLK1-MISP axis is thus an essential intracellular cascade of HER2 activation in addition to the canonical downstream signaling pathways.
Blocking the SHCBP1-PLK1 complex with the targeted agent TFBG renders gastric cancer sensitive to HER2-directed therapy, providing an additional combination therapy for HER2-positive patients with advanced gastric cancer.

Methods
Human gastric cancer clinical specimens. All specimens were acquired from patients under the auspices of clinical protocols approved by the Medical Ethics Review Board at the Lanzhou University Second Hospital. Informed consent was obtained from all participants. A total of 223 paired gastric cancer and adjacent normal tissues were assembled into TMAs from formalin-fixed, paraffin-embedded blocks and prepared for immunohistochemical staining. A total of 16 paired gastric cancer and adjacent normal tissues were used for microarray assays, and 8 paired tissues were used for immunoblotting. A total of 22 specimens from HER2-positive gastric cancer patients who received trastuzumab-based therapy were used for immunohistochemical staining to demonstrate the correlation of SHCBP1 with trastuzumab sensitivity (Supplementary Table 2). Consent to publish the clinical information reported in Supplementary Fig. 2, which could potentially identify individuals, was obtained.

Cell lines, cell culture, and lentiviral production. Human gastric cancer cells NCI-N87, HGC-27, MKN-45, and AGS were obtained from the Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences (Beijing, China). Human gastric cancer cells KATO-II and HS-746T were obtained from the Kunming Institute of Zoology, Chinese Academy of Sciences (Kunming, China). HEK293T cells were obtained from the ATCC, and human gastric cancer SNU-216 cells were obtained from the Korean Cell Line Bank. All cell lines were tested for mycoplasma contamination and validated by short tandem repeat DNA fingerprinting using the commercially available EX20 kit from AGCU. NCI-N87 and SNU-216 cells were grown in RPMI medium supplemented with 10% fetal bovine serum (FBS), and the other cell lines were cultured in DMEM/F12 medium supplemented with 10% FBS. All cell lines were incubated at 37°C with 5% CO₂ under humidified conditions. To generate SNU-216 stable cell lines expressing Flag-tagged Shc1, SHCBP1, or MISP, HEK293T cells were used to package virus. Pseudotyped virus was produced by co-transfecting 1 μg of LentiCMV-FLAG-Shc1, -SHCBP1, or -MISP, 1 μg of pDD, and 0.5 μg of pVSVg into a 3.5 cm dish of 293T cells. Cell culture supernatants containing recombinant lentiviruses were collected 48 h after transfection and filtered through non-pyrogenic filters with a pore size of 0.45 μm. Supernatants were used immediately at a dilution of 1:2 to transduce SNU-216 cells in the presence of polybrene (10 μg/mL). Transduced cells were selected with 2 μg/mL puromycin for 5-7 days to generate SNU-216 stable lines expressing Flag-tagged Shc1, SHCBP1, or MISP.

Plasmids and RNA interference. All plasmids used in this paper were generated using the Gibson assembly cloning method. The cDNA sequences of the Shc1, SHCBP1, PLK1, and MISP genes (Genechem, China) were inserted into the pLentiCMV-1 × Flag-puro vector (gift from the Hui Sun Lab) for conditional expression and into the pRK5 vector (gift from the Hui Sun Lab) for transient expression. The mutant constructs of SHCBP1 (S273A and S273D), MISP (S394A, S395A, and S397A), and PLK1 (K474S, Y485F, H489N, L490A, and Q501D) were generated by site-directed mutagenesis and cloned into the pLentiCMV-1 × Flag-puro or pRK5 vector.
The Shc1 and PLK1 sequences were cloned into pECFP-N1 and SHCBP1 into pEYFP-N1 (Addgene) for FRET. SHCBP1, MISP, and PLK1 were cloned into the pGBKT7 and pACT2 vectors (Addgene) for yeast two-hybrid assays and into pETDuetM-6HIS-GST-pp (gift from the Sanduo Zheng Lab) for GST pulldown assays. All constructs were verified by full-length sequencing. Lentiviral vectors encoding human SHCBP1, MISP, and scrambled control shRNA were obtained from the Genechem company. The lentiviral vectors containing shRNA inserts were packaged by co-transfection into HEK293T cells with psPAX2 and pMD2.G. SNU-216 and NCI-N87 cells were infected with lentivirus and selected with puromycin. Knockdown of SHCBP1 or MISP expression was validated by quantitative RT-PCR and immunoblotting. The shRNA sequences are listed in Supplementary Table 3.

Immunohistochemistry. TMAs of human gastric cancer specimens were deparaffinized and rehydrated, followed by antigen retrieval. After incubation with anti-SHCBP1 (Sigma, diluted 1:200) or anti-ERBB2 (Abcam, diluted 1:200) antibody, the slides were dehydrated and stabilized with mounting medium, and images were acquired with a KF-PRO-120 scanner (Konfoong, China). Staining intensity (0, 1, 2, and 3) and the percentage of positive cells among cancer ducts (0-100%) were evaluated by a pathologist at the Lanzhou University Second Hospital. The final histoscore (H-score) was calculated by multiplying the staining intensity by the percentage of positive cells. Specimens with an H-score < 70 were assigned to the SHCBP1 low-expression group and those with an H-score ≥ 70 to the SHCBP1 high-expression group, according to the median SHCBP1 H-score.

Confocal microscopy analysis. TMAs or cells cultured on 35 mm glass-bottomed microwell dishes (MatTek Corporation) were fixed with 4% paraformaldehyde for 10 min and permeabilized with 0.1% Triton X-100. Cells were then incubated with 3% bovine serum for 1 h at room temperature and with primary antibodies overnight at 4°C, washed thrice in PBS, and further incubated with the appropriate fluorescent-labeled secondary antibodies. Nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) before mounting. Confocal fluorescence images were captured using a Zeiss LSM 880 laser microscope (×63 oil objective, Plan-Apochromat 1.4). Nuclear SHCBP1 positivity was detected by SHCBP1 IF and quantified using ImageJ v1.53c software. The mean fluorescence intensity (MFI) of SHCBP1 in the nucleus was quantified, and cells with an MFI > 10 were scored as positive.

Time-lapse and flow cytometry analysis. For time-lapse analysis, shCtrl and SHCBP1 knockdown cells were cultured on glass-bottom dishes. Starting 24 h later, the cells were monitored using a time-lapse microscope system (Operetta CLS, PerkinElmer) for 24 h, and images were captured every 20 min. For cell cycle detection, shCtrl and SHCBP1 knockdown cells were collected and fixed with precooled 75% ethanol overnight at 4°C. The cells were immersed in a 37°C water bath for 30 min, washed twice with PBS buffer, and incubated in PI/RNase staining buffer (BD) for 15 min at room temperature before analysis. The DNA content across the cell cycle was measured by flow cytometry (CytoFLEX, Beckman Coulter).

Co-immunoprecipitation assay. Cells seeded in 10 cm dishes were lysed with lysis buffer [50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1 mM EDTA, and 0.2% Triton X-100] containing protease inhibitors.
After 10 min on ice, lysates were centrifuged at 13,800 × g for 10 min at 4°C. The supernatants were incubated with 20 μL of prewashed anti-Flag M2 affinity gel agarose (50% slurry, Sigma) for 2 h on a rotary shaker at 4°C. Immunoprecipitates were collected by centrifugation at 1500 × g for 2 min at 4°C, washed thrice with 1 mL of cold lysis buffer, and eluted.

Fig. 6 Natural product theaflavine-3, 3′-digallate (TFBG) blocks the SHCBP1-PLK1 interaction. a Interactions between PLK1 and different SHCBP1 deletion mutants analyzed by co-immunoprecipitation assays. WT wild type, IP immunoprecipitation, WCL whole cell lysates. b Interactions between SHCBP1 and the kinase domain (KD) or polo-box domain (PBD) of PLK1 analyzed by co-immunoprecipitation assays. c The bound conformation of SHCBP1 355-562 aa and the PLK1 PBD as predicted by the ZDOCK algorithm. The PBD is displayed in gray, and the SHCBP1 355-562 aa domain is displayed in red. d The protein complex of the PBD and SHCBP1 355-562 aa displayed in ribbon mode from the same angle of view as in c. The PBD residue bonds on the binding surface are marked in blue. e Interactions between SHCBP1 and different PLK1 deletion mutants containing residues on the binding surface of the model, analyzed by co-immunoprecipitation assays. f Interactions between SHCBP1 and different PLK1 mutants of four core amino acids on the binding surface of the model, analyzed by co-immunoprecipitation assays. g Computational model and interactions of TFBG and the SHCBP1-PLK1 complex (top left and right). Overall structure of the complex showing the binding of TFBG to PLK1 and SHCBP1, shown in green, gray, and red (bottom left and right). Architecture of PLK1 and TFBG showing the interacting amino acids of PLK1 (gray) and TFBG (green). h Kinetic constant (Kd) analyses of TFBG interacting with PLK1 using surface plasmon resonance (SPR) assays.

Affinity purification and mass spectrometry analysis. SNU-216 stable lines expressing Flag-epitope-tagged Shc1, SHCBP1, or MISP were seeded in 10 cm dishes and treated as indicated in the figure legends. Cells were lysed with lysis buffer [50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1 mM EDTA, and 0.2% Triton X-100] containing protease inhibitors. Lysates were centrifuged at 13,800 × g for 10 min at 4°C. The supernatants were incubated with 50 μL of prewashed anti-Flag M2 affinity gel agarose (50% slurry, Sigma) for 2 h on a rotary shaker at 4°C. Immunoprecipitates were collected by centrifugation at 1500 × g for 2 min at 4°C, washed thrice with 1 mL of cold lysis buffer, and eluted by adding 0.1 M glycine-HCl (pH 3.5). To identify interacting proteins of Shc1 or SHCBP1, the immune-complex samples were subjected to in-gel trypsin digestion overnight, and the extracted peptides were subjected to LC-MS/MS analysis. To identify the phosphorylation sites of SHCBP1 and MISP, samples were run on SDS-PAGE, and the position of the differentially phosphorylated bands was determined by conducting western blot analysis on an aliquot of the same samples. Bands were then excised and subjected to LC-MS/MS analysis.

Yeast two-hybrid assay. The yeast two-hybrid assay was conducted with the Matchmaker Yeast Two-Hybrid System (Clontech) according to the manufacturer's instructions. In brief, SHCBP1 or MISP was cloned into the pGBKT7 vector downstream of the Gal4 DNA-binding domain to construct bait plasmids, and SHCBP1 or PLK1 was cloned into the pACT2 vector downstream of the Gal4 DNA-activation domain to construct prey plasmids.
The bait and prey plasmids were transformed into Y187 yeast, and the resulting diploids were grown on -Leu/-Trp medium and subsequently assayed for activation of the reporter genes [growth on quadruple dropout medium (-Ade/-His/-Leu/-Trp)].

GST pulldown assay. The SHCBP1, PLK1, and MISP genes were cloned into the pETDuetM-6HIS-GST-pp vector. GST-tagged SHCBP1, His-tagged PLK1, and His-tagged MISP proteins were expressed in the Escherichia coli strain BL21 (DE3), and GST pulldown assays were conducted using a Pierce GST protein interaction pulldown kit (Thermo Scientific). Input and eluate samples were resolved by SDS-PAGE and analyzed by Coomassie blue staining or immunoblotting.

Cell viability assay and colony formation assay.

Fig. 8 In growth factor-free conditions, HER2 is inactive and SHCBP1 acts as an Shc1-binding protein interacting with Shc1. Following growth factor stimuli, HER2 is activated through dimerization with other HER family members and recruits Shc1 to evoke the MAPK and PI3K pathways. Released SHCBP1 responds to the HER2 cascade by translocating into the nucleus following Ser273 phosphorylation (pS273) and then contributes to the regulation of cell mitosis by binding PLK1 to promote the phosphorylation of the mitotic interactor MISP. Hyperactivation of the HER2-SHCBP1-PLK1 axis impairs the sensitivity of the HER2-targeted therapy trastuzumab. The inhibitor theaflavine-3, 3′-digallate (TFBG) blocks the SHCBP1-PLK1 complex and renders gastric cancer sensitive to trastuzumab. The TFBG-trastuzumab combination is highly efficacious in suppressing gastric cancer growth.

RT-PCR. Total RNA was extracted from harvested cells with TRIzol reagent and reverse transcribed into cDNA with a reverse transcription kit according to the manufacturer's protocols. Quantitative RT-PCR amplification and product detection were conducted on a LightCycler instrument (Roche) using SYBR Green dye (Takara) and 10 μM forward and reverse primers. The primer sequences used for qRT-PCR are listed in Supplementary Table 4.

Fluorescence resonance energy transfer assay. FRET measurements were performed using the acceptor-photobleaching method. For PLK1-SHCBP1 interaction analysis, cells were transfected with eYFP-SHCBP1 and eCFP-PLK1. For detection of Shc1-SHCBP1 binding, cells were transfected with eYFP-SHCBP1 and eCFP-Shc1. Cells were seeded on 35 mm glass-bottomed microwell dishes (MatTek Corporation), and imaging analyses were carried out using a Zeiss LSM 880 laser microscope with a ×63 oil objective (Plan-Apochromat 1.4). Intensities of eYFP and eCFP were collected using appropriate bandpass filters (eCFP: Ex 440 nm, Em 463-520 nm; eYFP: Ex 514 nm, Em 520-620 nm). eYFP was bleached with an intense 514 nm laser, and the average intensity of eCFP in a region of interest spanning the bleached cell was determined in the images before acceptor bleaching (I1) and after acceptor bleaching (I2). FRET efficiency is reported as [1 − (I1/I2)] × 100%. Calculations were based on more than 100 cell images.

Xenograft studies. Animals were handled and housed according to protocols approved by the Animal Ethics Committee of Lanzhou University Second Hospital. All animals received humane care according to the criteria outlined in the "Guide for the Care and Use of Laboratory Animals" prepared by the National Academy of Sciences and published by the National Institutes of Health. NCI-N87/luc tumor xenograft models were established in athymic nude mice. Tumors were imaged with an in vivo imaging system (Vieworks, Smart-LF, Korea) on the days shown.
To evaluate the involvement of SHCBP1 knockdown in gastric cancer sensitivity to trastuzumab, female BALB/c nude mice (6-8 weeks of age) received a single subcutaneous flank injection of 1 × 10⁶ Lenti-vector or Lenti-shRNA NCI-N87/luc cells in a 1:1 PBS:Matrigel suspension (BD Biosystems, San Jose, CA). When tumor volumes reached 100 mm³, trastuzumab was given at 10 mg/kg intraperitoneally twice weekly. Tumors were measured by caliper three times per week. Animals were sacrificed, and immunohistochemical staining was performed on the tumors after drug treatment for 21 days. To determine the synergistic effects of combining trastuzumab with TFBG, NCI-N87/luc cells (1 × 10⁶ cells per site) in suspension were mixed with equal volumes of Matrigel and injected subcutaneously into 6-8-week-old female BALB/c nude mice. When tumor volumes reached 100 mm³, the preestablished NCI-N87 tumor xenografts were treated with vehicle, trastuzumab (intraperitoneal, 10 mg/kg, 2×/wk × 3), TFBG (subcutaneous, 2.5 mg/kg, or intraperitoneal, 50 mg/kg, 1×/day × 21), or a combination of the agents. At the end of the experiments, animals were sacrificed and subcutaneous tumors were harvested. Tumor proliferation was assessed by Ki-67 immunohistochemical staining.

Protein-protein docking. The binding surface of the PLK1 PBD domain (367-603 aa) and the SHCBP1 355-562 aa domain was predicted with the Dock Proteins (ZDOCK) protocol 25. Briefly, we employed the I-TASSER server 27 to predict the 3D structure of the SHCBP1 355-562 aa domain; the SHCBP1 355-562 aa domain and the PLK1 PBD domain (PDB ID code 1UMW) were then defined as receptor and ligand proteins, respectively, and no amino acid was predefined as an interface residue or paired interacting residue. The docking model with the highest ZDOCK score was selected for further experimental validation.

Surface plasmon resonance screening. The SPR screening of inhibitors targeting SHCBP1-PLK1 was conducted using a Biacore T200 system (GE Healthcare). All experiments were carried out with HBS-EP (10 mM HEPES pH 7.4, 150 mM NaCl, 3.4 mM EDTA, and 0.005% surfactant P20) as running buffer at a constant flow rate of 10 μL/min at 25°C. PLK1, diluted with 10 mmol/L sodium acetate buffer (pH 5.0) to a final concentration of 10 μM, was immobilized on a CM5 sensor chip surface via covalent linkage to the N terminus of PLK1. Binding of the 40 compounds to PLK1 was assessed by passing the molecules (100 μM) over the immobilized PLK1 at a flow rate of 10 μL/min. The association and dissociation times were each 120 s. Compounds with response units >16 RU were selected, and kinetic analyses were performed. For kinetic analyses of the TFBG-PLK1 binding, TFBG was dissolved in the running buffer at concentrations ranging from 0.78 to 12.5 μM, and kinetic analyses were performed based on the steady-state affinity fit model according to the procedures described in the software manual.

Microscale thermophoresis assays. MST experiments were conducted on a Monolith NT.115 system (NanoTemper Technologies GmbH, Germany) to quantify the interaction between PLK1 and TFBG. PLK1 was labeled with the manufacturer's labeling kit (Monolith His-Tag Labeling Kit, NanoTemper Technologies GmbH, Germany). PLK1 solutions were prepared in 10 mM PBS (pH 7.4), and TFBG solutions were prepared in 10 mM PBS (5% v/v DMSO, pH 7.4).
The final concentration of PLK1 was 50 nM; TFBG was titrated from 1.5 × 10⁻¹⁰ to 5.0 × 10⁻⁶ M, and the mixed solutions of PLK1 and TFBG contained 0.05% v/v Tween 20. The samples were loaded into Monolith capillaries (MO L022, NanoTemper Technologies) and subjected to MST analysis. The dissociation constant was determined by fitting the curve with a single-site model.

Microarray analysis. Total RNA was extracted from 16 cancer and corresponding noncancerous samples from advanced gastric cancer patients, and the quality of the RNA samples was determined using a NanoDrop 2000 and an Agilent Bioanalyzer 2100. Samples with RINs > 7.0 were used, and mRNA microarray analyses using the Affymetrix GeneChip PrimeView Human array (Affymetrix, Santa Clara, CA) were performed according to the manufacturer's instructions. Fluorescent signals were scanned using an Affymetrix GeneChip Scanner 3000.

Statistical analysis. Statistical analyses were performed using SPSS version 25.0 and GraphPad Prism version 8.0. All data were analyzed for normality using the Kolmogorov-Smirnov or Shapiro-Wilk normality test. For normally distributed data, a two-sided Student's t test was used for two groups, and one-way ANOVA was performed for multiple groups, followed by the post hoc LSD method (homogeneity of variance) or Tamhane method (heterogeneity of variance). For non-normally distributed values, nonparametric tests were applied. Spearman correlation analysis was used to examine the correlation of gene expression between HER2 and SHCBP1. Kaplan-Meier analysis and the log-rank (Mantel-Cox) test were used for survival data. Univariate and multivariate analyses were performed using Cox proportional hazards regression models. Statistical significance was defined as p < 0.05.
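For illustration, the two-group branch of this test-selection workflow can be written down compactly in code. The sketch below is an outline in Python/scipy, not the SPSS/GraphPad workflow actually used in the study: it applies a Shapiro-Wilk normality check and then chooses between a two-sided Student's t test and a Mann-Whitney U test (one common choice of two-sided nonparametric test). The viability numbers are hypothetical.

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Normality check, then test choice, mirroring the workflow above.

    Shapiro-Wilk is used for normality; normally distributed data get a
    two-sided Student's t test, otherwise a Mann-Whitney U test is used.
    """
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        name, result = "two-sided Student's t test", stats.ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney U test", stats.mannwhitneyu(a, b, alternative="two-sided")
    return name, result.pvalue

# Hypothetical cell viability readings (n = 6 per group, made-up numbers):
ctrl = [0.92, 0.88, 0.95, 0.90, 0.93, 0.89]
trt = [0.61, 0.58, 0.66, 0.60, 0.64, 0.59]
print(compare_two_groups(ctrl, trt))
```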
2023-02-09T14:38:40.925Z
2021-05-14T00:00:00.000
{ "year": 2021, "sha1": "36450d4763f9233600d8cb15a886c87f8f1e907a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-021-23053-8.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "36450d4763f9233600d8cb15a886c87f8f1e907a", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
244264024
pes2o/s2orc
v3-fos-license
Assessing the Effectiveness of Natural Coating Application in Prolonging Shelf-Life in Plumcot Fruits

This study was carried out to assess the morphological characteristics, fruit quality, and antioxidant levels in sucrose ester-coated 'Harmony' plumcots (Prunus salicina Lindl. × P. armeniaca L.). Fruit samples in the control group were left untreated, with two further groups undergoing coating either after 0 days of cold storage (0 d CS) or after 7 days of cold storage (7 d CS), to evaluate changes in post-harvest quality at three-day intervals throughout 12 days of room temperature storage (12 DAS). Coating treatment significantly reduced fruit respiration during storage in the 0 d CS samples, attributed to the clogging of stomatal and lenticel pores in the peel, as observed on the fruits under scanning electron microscopy; however, the same effect was not observed in the 7 d CS samples, from fruits with a high initial CO₂ concentration. The coating delayed fruit softening and discoloration during storage in the 0 d CS samples, extending the shelf-life of the fruits by approximately 9 days. However, the coating treatment was found to reduce total flavonoid and anthocyanin content at 6 DAS and 12 DAS in both groups.

Introduction
Plumcot (Prunus salicina Lindl. × P. armeniaca L.) trees are likely a natural interspecific hybrid that arose long ago between Japanese plum (Prunus salicina Lindl.) and apricot (P. armeniaca L.) trees [1-6]. There has been increasing consumer demand for plumcot fruit due to its high content of bioactive compounds, total flavonoids, total phenols, vitamins A and C, and mineral nutrients considered important to human health and the prevention of disease [5,7]. New plumcot cultivars have been actively selected, including 'Harmony' in 2007, 'Tiffany' in 2010, and 'Symphony' in 2011, by the National Horticultural Research Institute of South Korea. However, post-harvest handling and storage of plumcot fruit has proven difficult for farmers because of its typical climacteric behavior as a Prunus fruit species [2,3,5]. 'Harmony' plumcots are widely cultivated in South Korea and typically harvested at 30-50% red skin coloration in July [3,8]. The fruits soften and decay rapidly at room temperature within two to five days due to rising internal concentrations of ethylene and CO₂ [3,4,8-15]; 1-methylcyclopropene has been effective in controlling fruit ripening and the resulting softening, skin color changes, and acidity changes in 'Harmony' fruits [8], but it is not seen as a desirable additive by consumers, with additive-free food in increasing global demand [12]. Cold storage for several days at temperatures lower than 8 °C has been seen to increase the risk of chilling injury in stone fruits, resulting in internal browning, breakdown, and discoloration of the flesh [16]; the fruits therefore require novel post-harvest care at room temperature to prevent post-harvest loss. This study was carried out to investigate fruit quality in coated 'Harmony' plumcots by examining morphological characteristics, antioxidant levels, and susceptibility to physiological disorders, and to evaluate whether sucrose ester application can effectively slow the fruit ripening process during storage.
Fruit Collection
Approximately 500 'Harmony' plumcot samples with 30% to 50% red skin coloration were collected at harvest from a packing house in a commercial orchard located in Yeongcheon, South Korea, on 30 June 2021 (Figure 1A,B). The fruit samples were immediately transferred to a laboratory at the agricultural experiment station of Daegu Catholic University in Gyeongsan, South Korea. Two methods of applying the coating were adopted and evaluated for their effectiveness in prolonging post-harvest fruit quality during room temperature (25.0 ± 0.5 °C/60 ± 5% relative humidity (RH)) storage for 12 days, using two fruit sample groups: one group was coated immediately after transfer from the packing house (after 0 d CS), while the other was kept for seven days in commercial cold storage (5.0 ± 0.5 °C/80 ± 5% RH; after 7 d CS) and then coated.

Coating Treatment
Fruit samples collected after 0 d CS and after 7 d CS were dipped into water as a control (uncoated control) or into a solution of 2.0% edible additives based on a mixture of sucrose monoesters of fatty acids (Naturcover Conservation Extra, Decco, Inc., Valencia, Spain) for 5 min as a coating, and then air-dried according to the method of Jung and Choi [25]. The coating material consisted of 15.0% (w/v) ethanol, 5.0% (w/v) alpha-D-glucopyranoside, beta-D-fructofuranosyl, mixed palmitates and stearates, and 2.0% (w/v) water [25]. Peel samples from control and coated fruits were taken for observation of their morphological structure at 6 DAS according to the method of Jung and Choi [25]. The epicuticular structure (5 × 5 × 2 mm) from a 2 mm thick section of peel was analyzed under scanning electron microscopy (SEM; SU-3500, Hitachi Co., Ltd., Tokyo, Japan) at a voltage of 10 kV and at 50×, 100×, and 500× magnification.

Fruit Quality Parameters
All quality parameters were analyzed using five fruits from the control and coated groups at each sampling time, at three-day intervals throughout 12 days of room temperature storage (12 DAS), conditions typical of fruit displayed to consumers in supermarkets. This was done for both groups, those coated after 0 d CS and those coated after 7 d CS, using 300 fruit samples per group. All fruits were kept unpackaged during storage. To monitor CO₂ concentrations, each fruit was kept for one hour in a 3000-mL plastic film bag containing a digital electronic CO₂ sensor capable of measuring 0-1000 mL·kg⁻¹·h⁻¹ (XE-2000 Multi-function, XEAST Co., Ltd., Shenzhen, China) [25]. Fresh fruit weight was measured with an electronic balance (EB-430HU, Shimadzu Co., Ltd., Tokyo, Japan). The weight loss percentage was calculated using the following equation: weight loss (%) = ((initial fruit weight − weight at sampling date)/initial fruit weight) × 100. Three sections at the middle of each fruit were thinly peeled to determine flesh firmness using a hand-held penetrometer mounted on a test stand with an 8-mm diameter cylindrical tip (FR-5105, Lutron Electronic Enterprise Co., Ltd., Taipei, Taiwan). Juice was freshly squeezed from the fruits through a cloth, and total soluble solids (TSS) and acidity were measured at 100-fold dilution using a Brix-acidity meter (GMK-706R, G-WON Hitech Co., Ltd., Seoul, Korea). The TSS/acid ratio, the ratio between sugars and acids in the plumcot and an important indicator of fruit quality, was determined by dividing TSS by acidity.
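Both derived quantities are simple ratios; the following minimal Python sketch shows the two calculations exactly as defined above, applied to hypothetical measurements (the numbers are illustrative, not data from this study).

```python
def weight_loss_percent(initial_weight_g: float, current_weight_g: float) -> float:
    """Weight loss (%) = ((initial - current) / initial) * 100, as defined above."""
    return (initial_weight_g - current_weight_g) / initial_weight_g * 100.0

def tss_acid_ratio(tss_brix: float, acidity_percent: float) -> float:
    """TSS/acid ratio: total soluble solids divided by titratable acidity."""
    return tss_brix / acidity_percent

# Hypothetical example: a 120 g fruit that weighs 95 g at a later sampling date,
# with a TSS of 12.5 °Brix and 0.9% acidity.
print(weight_loss_percent(120.0, 95.0))  # ~20.8 % weight loss
print(tss_acid_ratio(12.5, 0.9))         # ~13.9
```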
The sensory quality of each fruit was evaluated by four semi-trained panelists on a hedonic scale (1-5, from worst to best) for sweetness, sourness, and overall acceptability, according to Novianti et al. [29]. Fruit peel color was measured at three points on the equatorial region of the peel and expressed in terms of the parameters L*, a*, and b* using a digital colorimeter with an 8 mm measuring aperture (FR-5105, X-Rite, Inc., Grand Rapids, MI, USA). The parameter L* represents lightness, ranging from 0 to 100 (black to white, respectively). Positive values of a* (−a* = greenness) and b* (−b* = blueness) indicate reddish and yellowish colors, respectively.

Total Flavonoid and Anthocyanins
Total flavonoid content was measured through a colorimetric assay based on the procedure described by Chang et al. [34]. First, 5 g of fresh fruit tissue was extracted in 20 mL of 80% ethanol and then centrifuged at 3000× g for 20 min. The supernatant was transferred into a 10 mL volumetric flask containing 4 mL of distilled water and 0.4 mL of 5% NaNO₂; after 5 min, 0.4 mL of 10% AlCl₃ was added, and after a further 6 min, 2 mL of 4% NaOH was added and the solution was made up to a final volume of 10 mL with distilled water. The solution was measured colorimetrically at a wavelength of 510 nm using a UV-visible spectrophotometer (UV-1800, Shimadzu Scientific Instruments, Inc., Kyoto, Japan) to determine total flavonoid content. The total anthocyanin content of the flesh tissue was measured using a modified pH differential method [35]. First, 1.0 g of fresh tissue was extracted in 10 mL of 80% methanol with 0.1% HCl at 150 rpm for 2 h before being centrifuged at 3000× g for 20 min. Aliquots of the supernatant (3.0 mL) were each added to 5 mL of one of two buffers, 0.025 M potassium chloride at pH 1.0 or 0.4 M sodium acetate at pH 4.5, and incubated for 30 min. The extracts were analyzed colorimetrically at an absorbance wavelength of 510 nm using a UV-visible spectrophotometer (UV-1800, Shimadzu Scientific Instruments, Inc., Kyoto, Japan).

Statistical Analysis
All statistical analyses were conducted using Minitab software v. 15.1 (Minitab, Inc., State College, PA, USA). One-way analysis of variance was used to determine treatment effects, followed by Duncan's multiple range test on all main-effect means at p < 0.05. Data over time are shown as means ± standard errors.

Fruit Morphological Characteristics and Respiration
A porous surface was observed in the microstructure of the control fruits at 14 DAS via SEM (Figure 2A,C,E). The coated fragments of sucrose monoester did not retain completely cohesive films over the lenticels and stomata on the epidermal cells (Figure 2B,D,F). This lack of adhesion was most likely due to uneven application of the coating on the hairy skin of the fruit, along with strong climacteric behavior and substantial physical changes from fruit shrivel, which consequently reduced adhesion of the fragments at 14 days after storage [18-20]. The sucrose monoester with palmitate had previously been observed to be effective on the smooth peel of bananas and apples over 30 days, as it uniformly covered the surface with small fragments [19,25,27].
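For reference, the pH differential calculation cited in the Methods above is commonly evaluated as in the sketch below. This is a minimal Python outline under stated assumptions: cyanidin-3-glucoside equivalents (MW 449.2 g/mol, ε = 26,900 L·mol⁻¹·cm⁻¹), a 1 cm path length, and absorbance read only at 510 nm as described here (the standard method additionally subtracts a 700 nm reading, omitted in this sketch); the absorbance values are hypothetical, not data from this study.

```python
# Hypothetical 510 nm absorbance readings in the two buffers from the Methods.
A_pH1_0 = 0.84   # in 0.025 M potassium chloride, pH 1.0
A_pH4_5 = 0.22   # in 0.4 M sodium acetate, pH 4.5

MW = 449.2         # g/mol, cyanidin-3-glucoside equivalents (assumed)
EPSILON = 26900.0  # L mol^-1 cm^-1, molar absorptivity (assumed)
PATH_CM = 1.0      # cuvette path length in cm (assumed)
DILUTION = (3.0 + 5.0) / 3.0  # 3.0 mL supernatant added to 5 mL buffer

# Monomeric anthocyanin pigment concentration of the extract, in mg/L.
delta_A = A_pH1_0 - A_pH4_5
tac_mg_per_L = delta_A * MW * DILUTION * 1000.0 / (EPSILON * PATH_CM)

# 1.0 g fresh tissue was extracted into 10 mL, so content per g fresh weight:
tac_mg_per_g = tac_mg_per_L * (10.0 / 1000.0) / 1.0
print(f"{tac_mg_per_L:.1f} mg/L extract; {tac_mg_per_g:.3f} mg/g fresh weight")
```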
CO₂ concentrations in all fruits increased at 3 DAS and decreased at 9 DAS after 0 d CS (Figure 3A) and at 12 DAS after 7 d CS (Figure 3B), following the typical climacteric pattern previously observed in 'Harmony' plumcots stored at 10 °C and 20 °C [8]. Ripening in stone fruit species is associated with a rapid increase in ethylene production, resulting in the development of skin coloration, flavor changes, and softening, which shortens fruit shelf-life [20,22,33]. Coating treatment significantly reduced fruit CO₂ concentrations at room temperature after 0 d CS, because the coating would have formed a barrier to CO₂ and O₂ exchange between the plumcot fruit and the air, as previously reported in studies of edible-coated fruits [20,22,30,33]. However, coating after 7 d CS was not appreciably effective at decreasing fruit CO₂ concentrations from 6 DAS to 12 DAS. This may have been caused by acceleration of the ripening process during mild cold stress at a temperature below 8 °C for 7 days [16].

Fruit Quality
Flesh firmness is the prime factor determining consumer acceptance and freshness of stone fruits [12,36]. The coated fruits retained significantly higher flesh firmness than control fruits from 3 DAS to 12 DAS after 0 d CS (Figure 4A), by limiting the oxygen available to cell-wall-degrading enzymes in the stone fruits [12,33], as previously reported for apple, banana, and pear fruits coated with sucrose esters [19-22,25,27]. However, the coating did not maintain fruit firmness after 7 d CS from 6 DAS to 12 DAS (Figure 4B), mostly due to the elevated respiration rate at 0 DAS and the delayed application of the coating. Flesh weight loss did not differ significantly between treatments during storage either after 0 d CS (Figure 4C) or after 7 d CS (Figure 4D), although edible coatings have typically reduced weight loss in many other stone fruits by blocking fruit surface pores and reducing transpiration [10-15]. Flesh weight loss in all fruits increased rapidly, reaching 26-32% at 12 DAS, which promoted active respiration and transpiration and consequently reduced the treatment's effect during storage, as previously addressed in the literature [10-15].

Figure 4. Flesh firmness (panels A,B) and weight loss (panels C,D) in control (uncoated) 'Harmony' plumcot fruits and coated fruits with coating applied either 0 days after cold storage (After 0 d CS) or 7 days after cold storage (After 7 d CS), respectively, at 0, 3, 6, 9, and 12 days of room temperature storage (DAS). Bars represent the standard error of the means when larger than the symbol. ns, *, and *** indicate nonsignificant and significant differences between control and coating treatments at p < 0.05 and p < 0.001, respectively.

TSS, comprising the soluble sugars and organic acids in the fruit juice, was generally higher in control fruits than in coated fruits at later storage periods both after 0 d CS (Figure 5A) and after 7 d CS (Figure 5B). Low oxygen concentrations in coated fruits delayed starch degradation into TSS and inhibited the Krebs cycle activity related to titratable acidity [17,18], which was observed as higher acidity in coated fruits at 3 DAS and 6 DAS after 0 d CS (Figure 5C) but was not confirmed in the after 7 d CS samples (Figure 5D).
The TSS:acidity ratio increased in control fruits at 3, 6, and 12 DAS in the after 0 d CS samples (Figure 5E) and at 9 DAS in the after 7 d CS samples (Figure 5F); higher values are more likely to decrease the shelf-life of plum cultivars, as previously published research has shown [12,15,36]. Control fruits were sweeter and less sour than coated fruits both after 0 d CS (Figure 6A,C) and after 7 d CS (Figure 6B,D), improving overall taste (Figure 6E,F). This is attributed to organic acids acting as a respiratory substrate during rapid ripening [17,18,36], reducing sourness in later storage periods. Such sensory evaluation is helpful in assessing the sweetness or sourness of fruits and edible fruit quality during storage [37]. Neither TSS nor the SSC/TA ratio is highly recommended for assessing peach fruit quality and consumer preference, as fruit firmness and water content can be partially affected by environmental factors [29,36,37]. Volatiles added to coating substrates have led to decreased rates of microbial growth and degradation in strawberry and stone fruits, preventing fruit softening [12,31], but this was not confirmed to be effective in coated plumcot fruits with their more rapid ripening process (Figure 6G,H).

Figure 5. TSS (panels A,B), acidity (panels C,D), and TSS/acidity ratio (panels E,F) in control (uncoated) 'Harmony' plumcot fruits and in coated fruits with coating applied either 0 days after cold storage (After 0 d CS) or 7 days after cold storage (After 7 d CS), respectively, at 0, 3, 6, 9, and 12 days of room temperature storage (DAS). Bars represent the standard error of the means when larger than the symbol. ns, *, **, and *** indicate nonsignificant and significant differences between control and coating treatments at p < 0.05, p < 0.01, and p < 0.001, respectively.

The values of L* and b* on the peel decreased in all fruits during storage, while a* values increased (Figures 7A-F and 8A,B). Coating treatment maintained the bright green color of the fruit surface after 0 d CS by retarding fruit senescence, as previously shown in 'Harmony' plumcots [8]. However, coating did not affect fruit surface color during storage after 7 d CS, presumably because the coating was ineffective at preventing anthocyanin synthesis [12,33].

Figure 6. Sweetness (panels A,B), sourness (panels C,D), overall quality (panels E,F), and decay (panels G,H) in control (uncoated) 'Harmony' plumcot fruits and in coated fruits with coating applied either 0 days after cold storage (After 0 d CS) or 7 days after cold storage (After 7 d CS), respectively, at 0, 3, 6, 9, and 12 days of room temperature storage (DAS). Bars represent the standard error of the means when larger than the symbol. ns, *, **, and *** indicate nonsignificant and significant differences between control and coating treatments at p < 0.05, p < 0.01, and p < 0.001, respectively.

L* (panels A,B), a* (panels C,D), and b* (panels E,F) values of flesh tissue in control (uncoated) 'Harmony' plumcot fruits and in coated fruits with coating applied either 0 days after cold storage (After 0 d CS) or 7 days after cold storage (After 7 d CS), respectively, at 0, 3, 6, 9, and 12 days of room temperature storage (DAS). Bars represent the standard error of the means when larger than the symbol. ns, *, **, and *** indicate nonsignificant and significant differences between control and coating treatments at p < 0.05, p < 0.01, and p < 0.001, respectively.
Total Flavonoid and Anthocyanins
The coating treatment reduced total flavonoid and anthocyanin content in fruit tissue at 6 DAS and 12 DAS both after 0 d CS (Figure 9A,C) and after 7 d CS (Figure 9B,D). The coating would have been effective in reducing oxygen availability between the fruit and the air, reducing the activity of enzymes critical for anthocyanin synthesis, phenylalanine ammonia-lyase and flavanone synthase [12,14,33,34]. The main color changes in the plums were also correlated with anthocyanin pigmentation as well as chlorophyll alterations [14], reflected in the low a* values and anthocyanin levels found in coated fruits. However, coated strawberry fruit kept at low CO₂ concentrations showed increased levels of anthocyanins and total phenols, caused by the oxidation of ascorbic acid [31], probably owing to the ethylene-insensitive pathway characteristic of this non-climacteric fruit. Total flavonoid content increased at 12 DAS after 0 d CS (Figure 9A) and at 6 DAS after 7 d CS (Figure 9B), with the highest levels of approximately 140 mg·g⁻¹ in control fruits, associated with oxidative stress at room temperature as mentioned earlier in the literature [12,14,26,33,34].

Figure 9. Total flavonoid (panels A,B) and anthocyanin (panels C,D) content in control (uncoated) 'Harmony' plumcot fruits and in coated fruits with coating applied either 0 days after cold storage (After 0 d CS) or 7 days after cold storage (After 7 d CS), respectively, at 0, 3, 6, 9, and 12 days of room temperature storage (DAS). Bars represent the standard error of the means when larger than the symbol. ns, *, **, and *** indicate nonsignificant and significant differences between control and coating treatments at p < 0.05, p < 0.01, and p < 0.001, respectively.

Conclusions
Immediately following harvest, plumcot fruits are prone to spoil quickly during storage and distribution, significantly limiting fruit distribution to distant markets, a problem exacerbated by the lack of cold storage facilities in many developing countries [22,25,38]. Our research showed that a mixture of sucrose monoesters and volatile compounds could extend fruit shelf-life by up to 9 days after 0 d CS by limiting the oxygen absorbed by the climacteric fruits compared with the control group, and that it could easily be used as a post-harvest treatment for plumcot fruits and other Prunus species during hot and humid weather. However, further research is needed to investigate the potential post-harvest effects of coating application at different concentrations, and to acquire more detailed information on why delayed application did little to influence the post-harvest performance of fruits affected by cold stress and internal ethylene biosynthesis.
2021-10-18T18:30:29.434Z
2021-09-27T00:00:00.000
{ "year": 2021, "sha1": "1e945f893e420c6e84dc385a5efd5f2882a478d1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/19/10737/pdf?version=1632748883", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "65556a5fb3146848ac785b762052ec98316052a8", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
16075989
pes2o/s2orc
v3-fos-license
Identification of Selective Inhibitors of the Plasmodium falciparum Hexose Transporter PfHT by Screening Focused Libraries of Anti-Malarial Compounds

Development of resistance against current antimalarial drugs necessitates the search for novel drugs that interact with different targets and have distinct mechanisms of action. Malaria parasites depend upon high levels of glucose uptake followed by inefficient metabolic utilization via the glycolytic pathway, and the Plasmodium falciparum hexose transporter PfHT, which mediates uptake of glucose, has thus been recognized as a promising drug target. This transporter is highly divergent from mammalian hexose transporters, and it appears to be a permease that is essential for parasite viability in the intra-erythrocytic, mosquito, and liver stages of the parasite life cycle. An assay appropriate for high-throughput screening against PfHT was developed, based upon heterologous expression of PfHT in Leishmania mexicana parasites that are null mutants for their endogenous hexose transporters. Screening of two focused libraries of antimalarial compounds identified two such compounds that are high-potency, selective inhibitors of PfHT compared to human GLUT1. Additionally, 7 other compounds were identified that are lower-potency, lower-specificity PfHT inhibitors but might nonetheless serve as starting points for identification of analogs with more selective properties. These results further support the potential of PfHT as a novel drug target.

Introduction
Malaria represents a major global health challenge and is estimated to be responsible for ~216 million infections per year, resulting in ~655,000 deaths in 2010 (World Malaria Report 2011, http://www.who.int/malaria/world_malaria_report_2011/en/). Drug resistance continues to present a major obstacle to control of this disease, leading to the use of combination therapies [1]. The front-line therapy is currently Artemisinin Combination Therapy, but this treatment is now threatened by the emergence of slowly responding strains of the parasite [2,3]. Hence there is an urgent need to develop novel therapies that target pathways different from those disrupted by current drugs [4]. Furthermore, there is great interest in drugs that could be effective against multiple stages of the malaria life cycle [5]: to prevent development of disease, to control disease pathology, and to prevent transmission from one infected individual to the next. One remarkable aspect of the physiology of malaria parasites is their complete dependence upon glucose uptake and glycolytic metabolism [6]. Because the parasites do not express a mitochondrial pyruvate dehydrogenase [7], they rely completely on glycolysis for glucose catabolism, generating only two ATP molecules per glucose. The Krebs cycle and oxidative phosphorylation are not engaged for production of ATP from glucose. This inefficient use of glucose forces the parasite to transport large amounts of glucose to sustain viability and thus makes the parasite especially dependent on glucose uptake. Hence, inhibiting glucose import from the host's blood may be a novel therapeutic strategy. In 1999, Krishna and colleagues [8] cloned and functionally expressed the gene for the hexose/glucose transporter PfHT from Plasmodium falciparum. This permease is a member of the Facilitative Glucose Transporter (SLC2) family [9].
However, PfHT is highly divergent in sequence from all human orthologs, with GLUT1, the most closely related human ortholog, sharing only 28% amino acid identity with PfHT. This modest relatedness suggested that it might be possible to identify compounds that inhibit PfHT with high affinity without strongly inhibiting the orthologous human GLUTs. Subsequently, the Krishna laboratory demonstrated that the D-glucose analog 3-O-((undec-10-en)-1-yl)-D-glucose, also referred to as compound 3361, was able to selectively inhibit uptake of glucose by PfHT versus human GLUT1 [10]. Specifically, the Ki for glucose uptake by PfHT was 53 μM while no significant inhibition of GLUT1 was observed at a 1 mM concentration of compound 3361. Compound 3361 also inhibited growth of intraerythrocytic P. falciparum parasites in vitro with an IC50 value of 15.7 μM, and it induced a 40% reduction in parasitemia of mice infected with P. berghei parasites when administered at a dose of 25 mg/kg (i.p.) twice daily [10]. These results strongly suggested that PfHT provides an essential route for nutrient uptake in blood stream stage parasites. Subsequently, genetic evidence was advanced by two groups [11,12] demonstrating the inability to delete the PfHT gene unless parasites had been first transfected with an episomal copy of the gene to provide complementation. These results supported the notion that PfHT is an essential glucose transporter for intraerythrocytic parasites. Additionally, studies applying compound 3361 to hepatic stages and ookinetes of P. berghei demonstrated strong inhibition of viability of both these liver and mosquito stages of the malaria life cycle [11,13], implying that PfHT and its orthologs in other species of malaria are indeed essential in multiple stages of parasite development. As indicated by Krishna and colleagues [14][15][16], these observations suggest that inhibiting the parasite PfHT without impairing function of human SLC2 transporters such as GLUT1 might be a promising strategy for development of drugs. Transporters represent the targets for 13% of currently FDA-approved oral drugs with known targets in humans [16], establishing that permeases are often 'druggable' proteins likely to contain binding pockets for small molecules that are usually unrelated in structure to their natural permeants. Although compound 3361 represents one such selective inhibitor, it is not a drug-like compound and is not considered a lead for drug development [16]. Hence, it is important to identify other non-sugar compounds that selectively inhibit PfHT and might be advanced toward novel therapeutic agents. One approach to identifying novel PfHT inhibitors is to screen libraries of drug-like compounds for those that selectively inhibit PfHT with high affinity. The challenge in implementing this approach is to develop an assay for transporter function that can be carried out in a high-throughput screening strategy. We have previously demonstrated that both PfHT and GLUT1 can be heterologously expressed in a glucose transporter null mutant (Δlmxgt1-3) of the parasitic protozoan Leishmania mexicana, and that these transgenic parasites (Δlmxgt1-3[pPfHT] or Δlmxgt1-3[pGLUT1]) are auxotrophic for glucose [17]. In the current study we have employed a cell proliferation assay, using the DNA-binding dye SYBR Green, to screen two 'focused' libraries of compounds with demonstrated ability to inhibit growth of P. falciparum parasites in vitro.
The first library is the Tres Cantos antimalarial compound set (TCAMS) that consists of 13,533 compounds (https://www.ebi.ac.uk/chemblntd) that inhibit growth of P. falciparum 3D7 intraerythrocytic parasites by ≥80% at 2 μM concentration [18]. The second library is the Malaria Box collection of 400 compounds [19] with demonstrated antimalarial activity (http://www.mmv.org/research-development/malaria-box-supporting-information) that was obtained from the Medicines for Malaria Venture. Following the primary screen for compounds that inhibited growth of Δlmxgt1-3[pPfHT] transgenic parasites, we applied secondary screens that directly measured glucose uptake to identify the limited subset of primary hits that inhibited growth of this reporter cell line by selectively inhibiting glucose import through PfHT compared to GLUT1. This multistep approach identified two compounds that were high affinity inhibitors of PfHT and low affinity inhibitors of GLUT1. Analysis of analogs uncovered a third compound with similar properties. Additionally, 7 low potency, low selectivity inhibitors of PfHT also emerged. These hits provide compounds with potential for drug development against the essential P. falciparum hexose transporter PfHT.

Materials and Methods

High Throughput Screening

i. Parasite proliferation assay employing SYBR Green. To monitor proliferation of reporter cell lines in the presence of library compounds, 15 μL of DME-L medium [20] containing 5 mM glucose and 10% heat inactivated fetal bovine serum was dispensed into each well of 384-well microplates (black polystyrene, clear bottom, tissue culture treated, Corning) with a Matrix Wellmate liquid dispenser (Thermo Scientific). Stock compounds dissolved in dimethyl sulfoxide (DMSO) were pin-transferred (V&P Scientific) into the microplate to the desired final concentration using an automated robot arm. 15 μL of the reporter cell line, e.g., Δlmxgt1-3[pPfHT], at 2 × 10⁶/mL was added per well with the Wellmate dispenser. Microplates were incubated (Liconic) at 28°C and 5% CO2 for 72 h. After incubation, 10 μL of the reading solution (5X SYBR Green prepared from a commercial 100X stock, SIGMA, 5% Triton-X in PBS) was added per well. Each plate was shaken at 1000 rpm for one min, incubated further at room temperature for 20 min, and then fluorescence was read (excitation 485 nm, emission 535 nm) with the Envision plate reader (PerkinElmer). All data processing and visualization, as well as chemical similarity and substructure analysis, was performed using custom programs written in the Pipeline Pilot platform (Accelrys, v.7.0.1) and the R program [21]. ii. Glucose uptake assay in 96-well plate format. To measure uptake of glucose in a medium throughput format, 100 μL/well of PBS was added to 96-well filter plates (Millipore) with the Wellmate dispenser. Control wells included either 0 mM (positive control) or 20 mM (negative control) of the competitive inhibitor fructose in PBS. Compounds were added with the Biomek FX automated system. 90 μL of a cell suspension (1.1 × 10⁸ cells/mL) was added to each well with the Wellmate dispenser and left at room temperature for 5 minutes before adding 10 μL of the substrate (4 mM [3H] D-glucose at 50 μCi/mL) to provide a final glucose concentration of 200 μM. After a 5 min incubation, the uptake reaction was stopped by adding 50 μL/well of 4% formaldehyde, followed by incubation for ~5 min. Cells were filtered and washed with a vacuum manifold (Millipore).
Plates were dried overnight, 100 μL of scintillation fluid were added, and plates were then sealed and read on a TopCount NXT HTS from PerkinElmer. Data quality checks and analysis were performed using the GUItars program [22] and sigmoidal curve fitting with the Pipeline Pilot platform (Accelrys, v. 7.0.1). iii. Uptake assays. Glucose uptake assays employed logarithmic phase L. mexicana promastigotes of the Δlmxgt1-3 hexose transporter null mutant strain expressing either PfHT or GLUT1 from an episomal expression vector [17]. Parasites (1 × 10⁷) were resuspended in 100 μL phosphate buffered saline, pH 7.4. Prior to uptake, parasites were pre-incubated for 5 min with inhibitor, and uptake was initiated by adding 100 μL of 200 μM [3H] D-glucose diluted to 4.0 μCi/mL. Uptake was terminated, typically after 1 min, by centrifuging the parasites through a layer of dibutyl phthalate, as described [23]. Control uptake assays employing [3H] hypoxanthine, [3H] uridine, and [3H] L-proline were performed similarly. Analysis of kinetic data was performed using Graph Pad Prism 4.0b software (Graph Pad).

Kinetic Analysis of Nutrient Uptake in the Presence of Inhibitors

Growth Inhibition of P. falciparum and Human Foreskin Fibroblasts in vitro

i. Biological assay. Two P. falciparum strains were used in this study and were provided by the MR4 Unit of the American Type Culture Collection (ATCC, Manassas, VA). Those two strains were the chloroquine sensitive strain 3D7 and the K1 strain that is resistant to chloroquine, pyrimethamine, and sulfadoxine. ii. Proliferation of parasites and EC50 determinations. Asynchronous parasites were maintained in culture based on the method of Trager [24]. Parasites were grown in presence of fresh group O-positive erythrocytes (Key Biologics, LLC) in Petri dishes at a hematocrit of 4-6% in RPMI based media (RPMI 1640 supplemented with 0.5% AlbuMAX II, 25 mM HEPES, 25 mM NaHCO3 (pH 7.3), 100 μg/mL hypoxanthine, and 5 μg/mL gentamycin). Cultures were incubated at 37°C in a gas mixture of 90% N2, 5% O2, 5% CO2. For EC50 determinations, 20 μL of RPMI 1640 with 5 μg/mL gentamycin were dispensed per well in an assay plate (Corning 384-well microtiter plate, clear bottom, tissue culture treated, catalog no. 8807BC). 40 nL of compound, previously serially diluted in a separate 384-well white polypropylene plate (Corning, catalog no. 8748BC), was dispensed to the assay plate by hydrodynamic pin transfer (FP1S50H, V&P Scientific Pin Head) and then 20 μL of a synchronized culture suspension (1% rings, 4% hematocrit) was added to each well, thus making a final hematocrit and parasitemia of 2% and 1%, respectively. Assay plates were incubated for 72 h, and the parasitemia was determined by a method previously described [25]: Briefly, 10 μL of the assay solution in PBS (10X SYBR Green I, 0.5% v/v Triton X-100, 0.5 mg/mL saponin) was added to each well. Assay plates were shaken for 1 min, incubated in the dark for 90 min, and read with an Envision (Perkin Elmer) spectrophotometer at Ex/Em of 485 nm/535 nm. EC50 values were calculated with a custom program (RISE, Robust Investigation of Screening Experiments) using a four-parameter logistic equation. iii. Mammalian cell drug susceptibility assay. The BJ cell line (human foreskin fibroblasts), which expresses GLUT1 [26], was purchased from the ATCC and cultured according to recommendations.
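RISE is an in-house program, but the four-parameter logistic model it fits is the standard one. As a minimal, hedged illustration (not the authors' code; all data values below are hypothetical), the same EC50 calculation can be reproduced with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ec50, hill):
    """Standard four-parameter logistic (4PL) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hill)

# hypothetical dose-response data: compound concentration (uM) vs. signal (% of DMSO control)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
signal = np.array([99.0, 97.0, 91.0, 76.0, 49.0, 21.0, 9.0, 4.0])

p0 = [signal.min(), signal.max(), 1.0, 1.0]  # initial guesses: bottom, top, EC50, Hill slope
params, _ = curve_fit(four_param_logistic, conc, signal, p0=p0, maxfev=10000)
bottom, top, ec50, hill = params
print(f"EC50 = {ec50:.2f} uM (Hill slope {hill:.2f})")
```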
1000 exponentially growing cells were plated per well (25 μL) in white polystyrene flat bottom sterile 384-well tissue culture treated plates (Corning), and incubated overnight at 37°C in a humidified 5% CO2 incubator. DMSO inhibitor stock solutions were pin-transferred (V&P Scientific) the following day. Plates were placed back in the incubator for 72 h incubation and equilibrated at room temperature for 20 min before addition of 25 μL Cell Titer Glo (Promega) to each well. Plates were shaken on an orbital shaker for 2 min at 500 rpm. Luminescence was read after 15 min on an Envision plate reader (Perkin Elmer). EC50 values were calculated with a custom program (RISE, Robust Investigation of Screening Experiments) using a four-parameter logistic equation.

Physical Properties and in vitro Pharmacokinetics

i. Solubility assay. The solubility assay [27,28] was carried out on a Biomek FX lab automation workstation (Beckman Coulter, Inc.) using μSOL Evolution software (pION, Inc.) as follows: 10 μL of compound stock was added to 190 μL of 1-propanol to make a reference stock plate. Next, 5 μL of this reference stock plate was mixed with 70 μL of 1-propanol and 75 μL of PBS (pH 7.4) to make the reference plate, and the UV spectrum (250-500 nm) of the reference plate was read. Then, 6 μL of 10 mM test compound stock was added to 600 μL of PBS, pH 7.4, in a 96-well storage plate and mixed. The storage plate was sealed and incubated at room temperature for 18 h. The suspension was then filtered through a 96-well filter plate (pION Inc.). Next, 75 μL of filtrate was mixed with 75 μL of 1-propanol to make the sample plate, and the UV spectrum of the sample plate was read. Calculations were done using μSOL Evolution software based on the area under the curve (AUC) of the UV spectrum of the sample plate and the reference plate. All compounds were tested in triplicate. ii. Parallel artificial membrane permeability assay (PAMPA). A parallel artificial membrane permeability assay (PAMPA) [28,29] was conducted on a Biomek FX lab automation workstation (Beckman Coulter, Inc.) with PAMPA Evolution 96 Command software (pION Inc.) as follows: 3 μL of 10 mM test compound stock was mixed with 600 μL of PBS (pH 7.4) to make diluted test compound. Then 150 μL of diluted test compound was transferred to a UV plate (pION Inc.), and the UV spectrum was read as the reference plate. The membrane on a preloaded PAMPA sandwich (pION Inc.) was painted with 4 μL of GIT lipid (pION Inc.). The acceptor chamber was then filled with 200 μL of acceptor solution buffer (pION Inc.), and the donor chamber was filled with 180 μL of diluted test compound. The PAMPA sandwich was assembled, placed on the Gut-Box controlled environment chamber and stirred for 30 min. The aqueous boundary layer was set to 40 μm for stirring. The UV spectra (250-500 nm) of the donor and the acceptor were read. The permeability coefficient was calculated using PAMPA Evolution 96 Command software (pION Inc.) based on the AUC of the reference plate, the donor plate, and the acceptor plate. All compounds were tested in triplicate. iii. Liver microsome stability assay. The NADPH regenerating agent solutions A (catalog#: 451220) and B (catalog#: 451200) and mouse liver microsomes (CD-1, mixture of male, catalog#: 452701, and female, catalog#: 452702) were obtained from BD Gentest. The microsomal stability assay was carried out as described [29,30].
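The AUC-based solubility calculation above reduces to a ratio of UV areas between the filtered sample and a fully dissolved reference of known concentration, corrected for the different dilutions of the two plates. The sketch below is an assumption about how such software works internally, not a description of the μSOL Evolution algorithm; the dilution factors are derived from the volumes stated above (10 μL stock into 200 μL total = 20×; 5 μL into 150 μL total = 30×; filtrate mixed 1:1 = 2×).

```python
import numpy as np

def uv_auc(wavelength_nm, absorbance):
    """Area under the UV spectrum (250-500 nm), trapezoidal rule."""
    return np.trapz(absorbance, wavelength_nm)

def solubility_uM(auc_sample, auc_reference):
    """
    Estimate aqueous solubility from matched sample/reference UV spectra.
    Reference plate: 10 mM stock diluted 20x then 30x, i.e. ~16.7 uM fully dissolved.
    Sample plate: saturated filtrate diluted 2x with 1-propanol.
    (Dilution factors are assumptions inferred from the stated volumes.)
    """
    c_reference = 10_000.0 / (20 * 30)        # uM of compound in the reference plate
    c_sample_plate = c_reference * auc_sample / auc_reference
    return c_sample_plate * 2.0               # undo the 2x filtrate dilution
```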
For each test compound, the mouse liver microsomal solution was prepared by adding 58 μL of concentrated mouse liver microsomes (20 mg/mL protein concentration) to 1.756 mL of 0.1 M potassium phosphate buffer (pH 7.4) containing 5 μL of 0.5 M EDTA to make a 0.6381 mg/mL (protein) microsomal solution. Each test compound (2.2 μL of 10 mM DMSO solution) was added directly to 1.79 mL of mouse liver microsomal solution, and 90 μL was transferred to wells in 96-well plates (0, 0.25, 0.5, 1, 2, and 4 h time points, each in triplicate). The NADPH regenerating agent was prepared by mixing 0.113 mL of NADPH regenerating agent solution A, 0.023 mL of solution B and 0.315 mL of 0.1 M potassium phosphate buffer (pH 7.4) for each tested compound. To each well of the 96-well plate, 22.5 μL of the NADPH regenerating agent was added to initiate the reaction, and the plate was incubated at 37°C for each time point (0, 0.25, 0.5, 1, 2, and 4 h, each in triplicate). The reaction was quenched by adding 225 μL of cold acetonitrile containing warfarin (4 μg/mL) as internal control to each well. All of the plates were centrifuged at 3220 g for 20 min and the supernatants (100 μL) were transferred to another 96-well plate for analysis on UPLC-MS (Waters Acquity UPLC linked to Waters Acquity Photodiode Array Detector and Waters Acquity Single Quadrupole Mass Detector) on an Acquity UPLC BEH C18 1.7 μm (2.1 × 50 mm) column by running a 90-5% gradient of water (0.1% formic acid) and acetonitrile (0.1% formic acid) in 2 minutes. The area under the single ion recording (SIR) channel for the test compound divided by the area under the SIR for the internal control, with the time 0 value taken as 100%, was used to calculate the remaining concentration at each time point. The terminal phase rate constant (ke) was estimated by linear regression of logarithmically transformed concentration versus time, where ke = slope × (−ln 10). The half-life t1/2 was calculated as ln 2/ke. The intrinsic clearance CL_int,app = (0.693/in vitro t1/2) × (1 mL incubation volume/0.5 mg of microsomal protein) × (45 mg microsomal protein/gram of liver) × (55 g of liver/kg body weight) [31,32].

Development of an HTS assay against PfHT

To identify selective inhibitors of PfHT, it was necessary to develop an HTS compatible assay that could identify compounds that inhibit the glucose uptake activity of PfHT. For this purpose, we expressed both PfHT and GLUT1 in a glucose transporter null mutant of L. mexicana, Δlmxgt1-3, and grew the transgenic parasites in high glucose (5 mM) DME-L medium [20] that lacks the alternate carbon source proline [33], so that proliferation of the reporter lines would be completely dependent upon uptake of glucose through either PfHT or GLUT1 [17]. Proliferation of reporter strains was monitored by DNA content using the fluorescent dye SYBR Green [25]. Initial plate uniformity assays (http://htsc.wustl.edu/Aids/NCGC_Assay_Guidance_Manual.pdf) of the PfHT-expressing line in triplicate 384-well plates, using high (no phleomycin), medium (1.4 μM phleomycin) and low (1 mM phleomycin) growth conditions, resulted in Z' values [34] of 0.87, 0.87, and 0.89 (a Z' value of 1.0 would represent a perfect assay without error) and Coefficients of Variation for the high growth conditions of 3.3, 3.3, and 2.9% for each of the triplicate plates, indicating a robust HTS assay. A subsequent scaling screen of the ~2000-compound MicroSource Discovery Spectrum Collection resulted in a Z value of 0.81 and a Coefficient of Variation of 5.0%.
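Two quantities above are simple enough to make explicit. The Z' statistic is the standard plate-quality metric of reference [34], Z' = 1 − 3(σ_pos + σ_neg)/|μ_pos − μ_neg|, and the half-life and intrinsic clearance follow the formulas just given. A minimal Python sketch with purely illustrative numbers:

```python
import numpy as np

def z_prime(pos_ctrl, neg_ctrl):
    """Plate-quality statistic: Z' = 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (np.std(pos_ctrl) + np.std(neg_ctrl)) / abs(np.mean(pos_ctrl) - np.mean(neg_ctrl))

def ke_from_timecourse(time_h, pct_remaining):
    """Terminal rate constant from linear regression of log10(concentration) vs. time:
    ke = slope * (-ln 10)."""
    slope, _intercept = np.polyfit(time_h, np.log10(pct_remaining), 1)
    return -slope * np.log(10.0)

def intrinsic_clearance(t_half_h):
    """CL_int,app (mL/h/kg) = (0.693/t1/2) * (1 mL / 0.5 mg protein)
    * (45 mg protein/g liver) * (55 g liver/kg body weight), as stated above."""
    return (0.693 / t_half_h) * (1.0 / 0.5) * 45.0 * 55.0

# illustrative microsomal time course (% remaining) and plate controls
time_h = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0])
pct = np.array([100.0, 85.0, 72.0, 52.0, 27.0, 7.5])
high_growth = np.random.normal(50000, 1500, 32)   # fluorescence, no phleomycin
low_growth = np.random.normal(3000, 400, 32)      # fully inhibited control

ke = ke_from_timecourse(time_h, pct)
t_half = np.log(2.0) / ke
print(f"Z' = {z_prime(high_growth, low_growth):.2f}")
print(f"t1/2 = {t_half:.2f} h, CL_int,app = {intrinsic_clearance(t_half):.0f} mL/h/kg")
```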
The average Z value for plates in the primary screens of the HTS described below was 0.8.

Primary, secondary, and tertiary screens of the TCAMS library

We first performed a primary screen of the 13,533 compound TCAMS library [18]. A flow chart for the screen is outlined in Fig 1. Since this screen was part of a project to capture hexose transporter inhibitors for multiple parasites, compounds were screened in duplicate at 3 μM concentration employing 3 transgenic Δlmxgt1-3 lines expressing PfHT, the L. mexicana hexose transporter LmxGT2, and human GLUT1.

Fig 1. Flow chart for screen of TCAMS library. The TCAMS library of 13,533 compounds with demonstrated growth inhibitory activity against intraerythrocytic Plasmodium falciparum parasites was screened by sequential criteria. The steps included: 1) proliferation inhibitory screen (>65% inhibition at 3 μM concentration or >20% differential inhibition among the three strains) of PfHT, LmxGT2, and GLUT1 reporter strains to produce 401 primary hits; 2) 96-well plate assays for compounds (20-30 μM) that inhibited uptake of 200 μM [3H] D-glucose by ≥90%; 3) individual uptake assays for compounds (10 μM) that inhibited glucose uptake by ≥50%; 4) individual uptake assays for compounds (10 μM) that inhibited uptake of 100 μM [3H] L-proline by ≤10%; 5) dose-response curves for compounds that selectively inhibited uptake of glucose through PfHT versus GLUT1 (1 compound plus 1 additional hit that emerged from analysis of analogs). Numbers in parentheses represent the number of positive hits obtained after each sequential step.

Primary hits met one of two criteria for the average of duplicate samples: i) >65% inhibition of growth of each reporter line (140 hits), a cutoff chosen on the basis of receiver-operator characteristics [35,36]; ii) >20% differential inhibition of one line versus another (261 hits). This dual strategy captures both broad-spectrum hexose transporter inhibitors and those selective for individual permeases. The 401 primary hits were subsequently screened in glucose uptake assays performed in 96-well plates. The 65 compounds that inhibited uptake of 200 μM [3H] D-glucose by ≥90% when applied at 20-30 μM concentrations were designated secondary hits. Because of the relatively large scatter in uptake assays performed in a plate format, these 65 secondary hits were subsequently tested at 10 μM concentration in more accurate and reproducible individual glucose uptake assays [17], performed in triplicate, and the 9 that inhibited uptake by ≥50% were designated tertiary hits. These hits were also tested for inhibition of uptake of 100 μM [3H] L-proline to remove compounds that non-specifically inhibit membrane transport processes (≤10% inhibition of proline uptake at 10 μM compound), giving 6 validated hits that are selective inhibitors of glucose transport (Compounds 1-6 in Fig 2).

Screen of the Malaria Box Library

Screening was also performed on the 400 compound Malaria Box library. Since this library was much smaller than the TCAMS, it was screened initially at a concentration of 20-30 μM using the 96-well glucose uptake assay to provide 47 secondary hits with ≥90% inhibition of uptake by the PfHT reporter strain. These hits were rescreened in individual glucose and proline uptake assays at 10 μM to reveal 3 validated hits (Compounds 7-9 in Fig 2) that gave 75-100% inhibition of glucose uptake without significant inhibition of proline uptake.
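The dual primary-hit criteria are easy to mis-implement, so a small sketch may help. The function below (names and structure are ours, not the screening pipeline's) flags a compound if its mean duplicate inhibition exceeds 65% in every reporter line, or if any two lines differ by more than 20 percentage points:

```python
def is_primary_hit(inhibition_by_strain):
    """
    inhibition_by_strain: mean %-inhibition of duplicates per reporter line,
    e.g. {"PfHT": 72.0, "LmxGT2": 40.0, "GLUT1": 35.0}.
    Criterion i:  >65% inhibition of each reporter line (broad-spectrum inhibitor).
    Criterion ii: >20% differential inhibition of one line versus another (selective).
    """
    values = list(inhibition_by_strain.values())
    broad_spectrum = all(v > 65.0 for v in values)
    selective = (max(values) - min(values)) > 20.0
    return broad_spectrum or selective

# example: selective for PfHT, so criterion ii fires
print(is_primary_hit({"PfHT": 72.0, "LmxGT2": 40.0, "GLUT1": 35.0}))  # True
```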
Compounds that Inhibit Glucose Uptake Through PfHT

The compounds in Fig 2 are selective inhibitors of glucose versus proline uptake when glucose uptake is mediated by PfHT. To determine whether any of these compounds are selective inhibitors of PfHT versus the human glucose transporter GLUT1, we performed dose-response curves for inhibition of uptake of 100 μM [3H] D-glucose using the PfHT and GLUT1 reporter lines (Figs 2 and 3). Fig 2 reveals that there was one compound from the TCAMS library, TCMDC-125163 or Compound 1, and one from the Malaria Box library, GNF-Pf-3184 or Compound 7, that potently inhibited glucose uptake by PfHT (IC50 < 50 nM) but weakly inhibited uptake by GLUT1 (IC50 > 3 μM), providing respectively an 82-fold and 71-fold selective inhibition of PfHT versus GLUT1. Hence, these two compounds are high potency selective inhibitors of PfHT versus GLUT1. To begin to establish structure-activity relationships (SAR) for the top selective inhibitors, we performed dose-response curves for 6 analogs of Compound 1 (Fig 4). Two Compound 1 analogs, Compounds 11 and 13, did exhibit selectivity toward PfHT (65-fold and 94-fold, respectively), although their IC50 values for the parasite permease (0.94 and 0.32 μM, respectively) were higher than for Compound 1 (0.039 μM). Thus Compound 1 and its analogs present an encouraging SAR profile, representing a range of activities over the spectrum of analogs examined, consistent with the notion that this scaffold interacts with a specific target. Both Compounds 1 and 7 strongly inhibit glucose uptake through PfHT but are poor inhibitors of uptake of hypoxanthine, uridine, or proline through the innate nucleobase, nucleoside, and proline transport systems of L. mexicana (Fig 5). A further question is whether analogs of the lower potency, lower selectivity hits in Fig 2 might show improved potency or specificity toward PfHT. To probe this question, we examined 6 analogs of Compound 8 (Fig 6), one of the hits that has higher potency toward GLUT1 than PfHT. While none of these analogs proved to be a high potency selective inhibitor of PfHT, some of them (Compounds 16 and 19) did reverse specificity, showing somewhat better selectivity for PfHT versus GLUT1 (e.g., IC50 ratios for GLUT1/PfHT of 2-3 versus 0.14, a 15-22-fold improvement in specificity). To determine whether any of the scaffolds represented by Compounds 2-6, 8, and 9 could produce high affinity selective inhibitors of PfHT, it will be necessary to carry out extensive structure-activity studies for each scaffold, studies that are beyond the scope of the current investigation.

Kinetics for Inhibition of PfHT by Compounds 1, 7 and 13

To further investigate the mode of action of the top hits from the screens, we performed kinetic analysis for inhibition of glucose uptake through PfHT (Table 1). Statistical analysis of the Vmax and Km values, determined at different concentrations of each compound, using one-way ANOVA reveals that the Vmax values decrease significantly as each compound concentration increases, but the Km values do not differ significantly from each other as compound concentration increases. These results suggest that Compounds 1, 7, and 13 act as non-competitive inhibitors of glucose and hence likely interact with PfHT at sites distinct from the glucose binding pocket. However, most cases of nominal non-competitive inhibition are more likely to represent 'mixed inhibition' in which Km values do change somewhat with increasing inhibitor concentration [37].
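The kinetic signature reported above, Vmax falling with inhibitor concentration while Km stays constant, is exactly what the classic noncompetitive model predicts. As a hedged illustration with synthetic data (not the authors' Prism analysis), a global fit of that model can be sketched as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def noncompetitive(X, vmax, km, ki):
    """Classic noncompetitive inhibition: apparent Vmax = Vmax/(1 + I/Ki), Km unchanged."""
    s, i = X
    return (vmax / (1.0 + i / ki)) * s / (km + s)

# hypothetical uptake rates over a grid of glucose (S, uM) and inhibitor (I, nM) concentrations
S = np.tile([25, 50, 100, 200, 400, 800], 3).astype(float)
I = np.repeat([0, 25, 100], 6).astype(float)
vmax_true, km_true, ki_true = 10.0, 150.0, 40.0
rng = np.random.default_rng(0)
v = noncompetitive((S, I), vmax_true, km_true, ki_true) * rng.normal(1.0, 0.05, S.size)

# global fit across all inhibitor concentrations recovers a single Km and a nM-range Ki
(vmax, km, ki), _ = curve_fit(noncompetitive, (S, I), v, p0=[5.0, 100.0, 50.0])
print(f"Vmax = {vmax:.1f}, Km = {km:.0f} uM, Ki = {ki:.0f} nM")
```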
None of these compounds acts as a competitive inhibitor that would need to compete with the abundant glucose supply present in the bloodstream of a malaria-infected host. The Kic and Kiu values (inhibition constants for binding of inhibitor to free and glucose-bound PfHT, respectively, determined graphically [37] using the Km and Vmax values in Table 1) are also in the nM range for Compounds 1, 7 and 13, indicating high affinity inhibition.

Inhibition of Growth of Intraerythrocytic P. falciparum in vitro

To determine whether selective inhibitors of PfHT are effective against malaria parasites in vitro, Compounds 1, 10, 12, and 13 were tested in dose-response format for their ability to inhibit proliferation of intraerythrocytic P. falciparum parasites (drug sensitive 3D7 and multidrug resistant K1 strains, Table 2). The most potent inhibitor of PfHT, Compound 1, was also the most potent inhibitor of proliferation, with EC50 values of 1.4 μM and 0.97 μM, respectively, for the 3D7 and K1 strains. In contrast, Compounds 10, 12, and 13, which are much poorer inhibitors of PfHT, exhibited EC50 values of >10 μM for growth inhibition. None of the 4 analogs showed substantial toxicity toward human foreskin fibroblasts (BJ cells). Although the number of compounds examined was limited, Compound 1, which exhibited the highest potency and selectivity for inhibition of PfHT, also possessed the highest potency for inhibition of P. falciparum growth in vitro.

In vitro Pharmacokinetic Profiles

The solubility and permeability of the top hit Compound 1 and its analogs (Table 3) were measured using in vitro assays in order to determine the likelihood of oral absorption. Likewise, metabolic half-life following exposure to mouse liver microsomes was studied to predict liver clearance in vivo. Compound 1 exhibits low aqueous solubility and high membrane permeability, but also relatively high membrane retention and a relatively high rate of metabolism by liver microsomes. Thus, while exhibiting strong performance in cellular assays, the compound series will require further optimization prior to validation in vivo.

Discussion

In this study, focused libraries of compounds with demonstrated proliferation inhibitory capacity against P. falciparum intraerythrocytic stage parasites were employed, because any hits would constitute compounds with demonstrated efficacy against the parasite. Screening of the TCAMS and Malaria Box libraries each generated one PfHT selective inhibitor, and characterization of several analogs uncovered another selective inhibitor with 8-fold lower affinity, Compound 13, in which the three methoxy groups of Compound 1 have been replaced by a meta-cyano group. These results warrant further examination of additional Compound 1 analogs to search for derivatives of this scaffold that might have higher affinity, increased selectivity, or improved physico-chemical properties as a lead for further drug development. In addition, the initial SAR studies on Compound 1 revealed a range of potencies and specificities for different analogs (Fig 4). These results suggest the possibility that some analogs of other low potency, low specificity inhibitors (Fig 2) might emerge as structurally related high potency, high specificity compounds. The advantage of this approach is that it could expand the number of scaffolds representing selective inhibitors of PfHT that might be advanced as alternate drug leads.
A variety of commercially available analogs exist for each of the compounds in Fig 2, and these can be tested for inhibition of PfHT and antimalarial activity. Indeed, a limited SAR study of 6 analogs of Compound 8 (Fig 6) did detect compounds with improved specificity for PfHT, suggesting that further SAR on this and other scaffolds may be warranted. Currently, the most promising scaffold for further development is represented by Compound 1, whose structure should allow facile modification to pursue a medicinal chemistry program. Similar advantages apply to other scaffolds represented in Fig 2, should high affinity selective inhibitors emerge from initial SAR studies. In contrast, Compound 7 probably represents a probe-like rather than a drug-like compound, that is, a compound with chemical and structural properties that are not optimal for drug development but may be useful for biochemical analysis. Nonetheless, the identification of multiple selective inhibitors of PfHT provides proof of principle that non-substrate analogs can provide potent antagonists that act at sites different from those occupied by substrates. Compound 1 inhibits proliferation of both multi-drug resistant and non-resistant intraerythrocytic P. falciparum with EC50 values of ~1 μM (Table 2). In contrast, Compounds 10, 12, and 13, which are all less potent inhibitors of glucose uptake through PfHT, inhibit growth of parasites in vitro with significantly weaker potency. The correlation between inhibition of PfHT and growth suppression suggests that Compound 1 may exert its proliferation inhibitory effect through inhibition of PfHT, as anticipated for a compound that impairs the ability of the parasite to acquire an essential nutrient. Hence, PfHT is a potential pharmacological target of Compound 1, although rigorous demonstration of this point will require further studies. The in vitro pharmacokinetic and physical properties of Compound 1 (Table 3) reveal that it has relatively low aqueous solubility and high membrane permeability, but also a high percentage of retention in the membrane, when compared to the range of properties for the four control drugs. The metabolic stability is relatively low and the intrinsic clearance is relatively high, compared to verapamil. The high membrane permeability and low solubility place these compounds in Biopharmaceutics Class II [38] and indicate that such compounds would likely require appropriate formulation to provide reasonable oral bioavailability. These properties also suggest that for Compound 1 to be progressed toward a potential anti-malarial lead, a medicinal chemistry program would need to focus on decreasing membrane retention and increasing metabolic stability. Alternate scaffolds may emerge if any of the analogs of low potency, low specificity PfHT inhibitors exhibit higher potency and specificity toward PfHT. This approach could provide multiple scaffolds of diverse structure that could be explored for optimal drug-like properties. Overall, this study confirms that it is possible to identify high affinity, high selectivity inhibitors of PfHT by screening focused libraries of drug-like compounds that inhibit growth of P. falciparum. Similarly, screens of larger unfocused libraries may identify other such PfHT inhibitors that could be optimized subsequently for anti-malarial activity. This work further suggests that PfHT may have potential as a novel drug target for control of this pathogen of global importance.
Recent Advancements in Mosquito-Borne Flavivirus Vaccine Development

Lately, the global incidence of flavivirus infection has been increasing dramatically and presents formidable challenges for public health systems around the world. Most clinically significant flaviviruses are mosquito-borne, such as the four serotypes of dengue virus, Zika virus, West Nile virus, Japanese encephalitis virus and yellow fever virus. To date, no effective anti-flaviviral drugs are available to fight flaviviral infection; thus, a highly immunogenic vaccine would be the most effective weapon to control the diseases. In recent years, flavivirus vaccine research has made major breakthroughs, with several vaccine candidates showing encouraging results in preclinical and clinical trials. This review summarizes the current advancement, safety, efficacy, advantages and disadvantages of vaccines against mosquito-borne flaviviruses posing significant threats to human health.

Introduction

Mosquito-borne flaviviruses, members of the genus Flavivirus, are small spherical particles transmitted by various species of mosquitoes [1]. The genome of these viruses is nearly 11 kb in length [2][3][4][5], encoding a single polyprotein precursor which includes three structural proteins, capsid (C), premembrane (prM) and envelope (E), and seven nonstructural (NS) proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B and NS5) [6,7]. Of these proteins, the E protein induces a protective immune response, and the significance of antibodies targeting flaviviral E during antiviral protection has been verified in several animal models [8][9][10][11]. Dengue virus (DENV), Zika virus (ZIKV), West Nile virus (WNV), Japanese encephalitis virus (JEV) and yellow fever virus (YFV) are five typical mosquito-borne flaviviruses that can cause severe infectious diseases and are thus of medical significance [12]. The wide distribution of the corresponding mosquito vectors in the world results in the pandemic transmission of these flaviviruses and increases the infection risk of human populations [12,13].

Live Attenuated Virus Vaccines

Due to significantly low manufacturing costs [15,62] and a comprehensive, long-lasting immune response after a single dose [15,59,61,62], live attenuated virus vaccines (LAVs) are the most successful vaccines at preventing infections. An estimated 63% of FDA-approved vaccines are LAVs [59,63]. LAVs against mosquito-transmitted flaviviruses have developed rapidly in recent years. YFV-17D is considered to be the safest and most effective LAV against YFV infection [6]. SA14-14-2 is another effective LAV for JEV. One novel LAV technology, ChimeriVax (Chambers et al., St. Louis University Health Sciences Center, USA), has recently shown considerable promise for the development of vaccines against mosquito-borne flaviviruses [61]. ChimeriVax is composed of a backbone cDNA clone of a flavivirus, of which the prM-E segment is substituted with the corresponding one from the virus selected for the vaccine. Typically, the backbone comes directly from an existing LAV strain or from an engineered wild-type (WT) virus attenuated through mutations in vitro [6]. RepliVax (Widman et al., University of Texas Medical Branch, USA) is another emerging technology for the development of flavivirus vaccines based on deletion mutants [64]. These mutant viruses are not capable of assembling and releasing progeny virus particles (single-cycle viruses) or replicating their viral genomes [15].
Generally, the flavivirus RepliVax vaccines are constructed by deletion of the C protein from single-cycle viruses, producing subviral particles [15,65] (SVPs, which contain the antigenically significant E and prM/M proteins but lack the C protein or the viral genome, rendering them non-infectious [25,66]). Similarly to ChimeriVax, RepliVax vaccines are relatively safe [15], have the ability to continuously stimulate immune responses [67], and do not require the adjuvants that are used in inactivated virus vaccines (INVs) [64]. However, the reversion of vaccine strains to increased virulence is the main problem related to live virus vaccines. In addition, LAVs are not recommended for the immunocompromised and pregnant women [62].

DENV

A tetravalent recombinant LAV called Dengvaxia® (Sanofi Pasteur, France) [18,68,69] is the only approved DENV vaccine [17,[70][71][72][73]. The vaccine is produced by the chimeric technology, in which the prM and E genes of the YFV-17D backbone were substituted by those of DENV 1, 2, 3 and 4 [17,68,69,[73][74][75][76]. Studies indicated that Dengvaxia® was safe and induced neutralizing antibodies against DENV of the existing four serotypes [77]. Although Dengvaxia® has been licensed for DENV prevention, global uptake has been hampered because it can only be administered to seropositive persons above 9 years of age, and it requires a sequential 3-dose schedule [21,78]. Takeda Pharmaceutical Company (Tokyo, Japan) has developed a chimeric tetravalent LAV candidate (TDV) which contains the attenuated DENV-2-PDK53 strain with the prM and E genes of the three other serotypes of DENV [18,71,75,79]. TDV has been confirmed to elicit both long-term humoral [80][81][82][83] and cellular immune responses [83][84][85]. A recent phase III clinical trial has shown that TDV was well tolerated [71,[86][87][88] and induced 62% seroconversion for the four DENV serotypes [18,71]. The primary efficacy data indicated promising results, showing 80.2% overall vaccine efficacy, 95.4% preventive efficacy against severe dengue forms, and 74.9% efficacy in dengue-seronegative patients [71]. LATV, another tetravalent LAV for dengue, was developed by the National Institute of Allergy and Infectious Diseases (NIAID) in the US [69,89]. It was composed of four attenuated DENV serotype strains. The phase I-III clinical trials verified that LATV had an acceptable safety profile [69] and could elicit potent humoral immune responses against all DENV serotypes, with long-lasting antibody persistence. In addition, cross-reactive T cell-mediated immune responses are also activated comprehensively by the vaccine [79,88,[90][91][92][93]. These data indicate that LATV may serve as an efficient and safe DENV vaccine for application in individuals of all ages, regardless of DENV infection status before vaccination [94]. The Walter Reed Army Institute of Research (WRAIR) and GlaxoSmithKline (GSK) have recently codeveloped a tetravalent dengue LAV called TDEN. TDEN resulted in 100% seroconversion toward all four DENV serotypes in flavivirus-primed subjects in phase II clinical trials. It has been shown that a two-dose administration regimen was safe and effective in volunteers ranging from 12 months to 50 years of age [69].

ZIKV

ChinZIKV (Beijing Institute of Microbiology and Epidemiology, China) was developed from the JEV SA14-14-2 strain by substituting its prM-E genes with those from the ZIKV FSS 13025 strain.
It has been shown that ChinZIKV protects mice and rhesus macaques against ZIKV challenges, and ZIKV intrauterine transmission was also blocked in pregnant mice [27,30,95,96]. ChimeriVax-Zika is also a chimeric vaccine, which contains the ZIKV prM-E antigens in the YFV-17D backbone. One dose of the vaccine induced robust ZIKV neutralizing antibodies which protected mice against ZIKV challenges [27]. Recently, a ZIKV LAV called rZIKV/D4∆30-713 has entered a phase I clinical trial. It is also a chimeric vaccine, which uses DENV-4 as the backbone to deliver the ZIKV prM-E antigens. This chimeric recombinant vaccine was highly attenuated in A129 mice (type I interferon receptor knockout mice) and provided protective immunity against ZIKV challenges [27,96].

WNV

Researchers from the Beijing Institute of Microbiology and Epidemiology have engineered a chimeric JEV/WNV virus (ChinWNV) cDNA which used the SA14-14-2 subgenomic replicon as the genetic backbone, with the prM and E genes of JEV substituted with the corresponding WNV genes. Studies have shown that one dose of ChinWNV rapidly induced strong humoral immune responses in mice, providing solid protection against lethal WT-WNV challenge [97]. The US NIH has also developed a recombinant WNV LAV (WN/DEN4∆30), which comprises the prM and E genes of WNV NY99 and the C and NS genes of rDEN4∆30 [47,98,99]. The results of clinical trials revealed that WN/DEN4∆30 was safe and immunogenic [98]. Another ChimeriVax WNV vaccine, WNV02, was generated based on the YFV 17D backbone, in which the prM/E genes of the YFV 17D were substituted with those of the WNV NY99 [59,100,101]. Monkeys vaccinated with WNV02 were potently protected against lethal WT-WNV challenge [15,59] with a high degree of safety [34], and this vaccine has entered phase II clinical trials [102]. A RepliVAX WNV vaccine (RepliVAX WN) containing a large deletion in the C protein gene [66] was demonstrated to provide complete protection against lethal WT-WNV challenges after a single inoculation at the lowest dose in mouse and hamster models [15,66,103].

JEV

SA14-14-2, developed in China [6,13,104], is based on the WT strain SA14, which was attenuated by 114 passages in primary hamster kidney cells [49,105]. Studies indicated that SA14-14-2 was highly immunogenic and that a two-dose regimen induced almost 100% seroconversion and protection against infection [106]. SA14-14-2 is the most widely applied LAV against JEV in the world [49,106] and has been successfully utilized in China, with more than 100 million doses administered [105]. However, the theoretical risk of the attenuated SA14-14-2 virus reverting to a highly virulent strain has, to some extent, limited its global application to prevent JEV infection [106]. ChimeriVax-JE, a chimeric JEV vaccine, employed YFV 17D as the backbone to express the prM and E of SA14-14-2 as the antigens [6,86,106]. Numerous preclinical studies have suggested that the ChimeriVax-JE vaccine was well tolerated and immunogenic, and that it provides protection against JEV infection in mice and non-human primate (NHP) models. The clinical trials indicated that a single dose of ChimeriVax-JE generated almost the same effect as three doses of JE-VAX (an inactivated JEV vaccine), producing near-complete seroconversion in the subjects [107][108][109][110]. Ishikawa et al. developed a RepliVAX-JE.2 vaccine based on RepliVAX technology, which expresses the JEV prM and E genes in place of the WNV ones.
It was indicated that RepliVAX JE.2 was well tolerated and provided full protection for mice subjected to lethal JEV challenges [65,111,112].

YFV

Yellow fever is the third human infectious disease (after smallpox and rabies) to be controlled by vaccination. A live attenuated YFV (17D), derived from a human isolate (Asibi), has been applied safely and effectively as an LAV for more than 80 years [113], and administered to more than 600 million people around the world [114]. One dose of the vaccine can generate long-lasting specific neutralizing antibodies, persisting for 35 years in some vaccinees, explaining its long-term efficacy [14,114,115]. Another effective YFV LAV (FNV) was produced by attenuating the French viscerotropic strain of YFV through 128 passages in mouse brain. It was applied in Francophone Africa and showed strong immunogenicity. Unfortunately, FNV was found to have increased neurotropic potential, making it unsafe for use in children [14]. Use of FNV was terminated after the last doses were administered in 1981 [105,114,116].

Inactivated Virus Vaccines

Inactivated virus vaccines (INVs) are chemically or physically inactivated whole virions or subunits of viruses [38,117]. INVs cannot revert to a more pathogenic phenotype, yet they contain the whole complement of viral antigens [27], which can induce a balanced antibody response [62,118]. The protection afforded by INVs is mainly based on viral surface proteins, which can induce neutralizing antibodies. The immunogenicity of INVs is greatly enhanced when the antigen is presented in particulate form (virions or SVPs) [119,120]. However, two major disadvantages of INVs are their high expense and the need for repeated vaccination [27], both of which make vaccination difficult to achieve in endemic areas [62].

DENV

A tetravalent dengue INV was developed by the WRAIR with the GlaxoSmithKline adjuvant systems. This vaccine resulted in robust humoral responses and balanced neutralizing antibody responses for all four DENV serotypes [121].

ZIKV

A purified inactivated ZIKV vaccine, ZPIV, was produced by inactivating the PRV-ABC59 strain with formalin and formulating it with an aluminum adjuvant [27,31,96,122]. One dose of ZPIV protected all mice from challenges with the ZIKV-BR strain. Two doses of ZPIV enabled monkeys to remain immune against ZIKV even a year after vaccination [96,123]. Moreover, antibodies from immunized monkeys could block ZIKV infection in mice and monkeys in a dose-dependent manner. Currently, ZPIV has entered a phase I clinical trial [27]. The formalin-inactivated MR766 strain was used to develop the ZIKV INV (PIV) [124]. It has been shown that PIV provided full protection against a lethal ZIKV challenge in AG129 mouse models after two-dose administration. Importantly, serum transferred from immunized rabbits also provided protection in mouse models [27,96].

JEV

JE-VAX, a licensed INV for JEV, was produced by inactivating the JEV Nakayama strain with formalin [50,128]. The original virus was replicated in mouse brain, and the inactivated virus was purified by ultracentrifugation [129]. This INV was confirmed to induce robust immune responses [130][131][132][133]. However, as for any INV, high cost and the need for repeated administration are still limitations. Moreover, JE-VAX was also associated with some serious allergic and neurologic side effects [134][135][136][137][138]. Production of this vaccine ceased in 2006 due to the above drawbacks, and remaining stocks were depleted in 2011 [139,140].
IC51, an investigational INV against JEV, comprises 6 µg of purified, inactivated SA14-14-2 adsorbed to 0.1% aluminum hydroxide [104]. It has been demonstrated that IC51 was considerably immunogenic, highly safe and well tolerated [141]. Two doses of the vaccine induced high protective antibody titers equivalent to three doses of JE-VAX [142]. CVI-JE, a freeze-dried INV, was based on the inactivated Beijing P-3 strain of JEV. The high safety and immunogenicity of CVI-JE have been demonstrated in clinical trials, and two doses of the vaccine can induce 100% seroconversion rates [48]. More than seven million doses of CVI-JE have been administered in China since it was registered in 2008 [48,106].

Nucleic Acid Vaccines

Nucleic acid vaccines generally comprise DNA and RNA vaccines [67,75]. DNA vaccines are commonly produced by cloning a promoter and the gene encoding the antigen of the vaccine into a plasmid [27], and they have several advantages over other types of vaccines, including the ability to induce intracellular antigen processing for adaptive immunity [75], no possibility of reversion to a pathogenic phenotype, stability at extreme temperatures for long periods, and ease and low cost of manufacture [6,62,75,143]. In addition, combinations of DNA vaccines are not adversely affected by pre-existing antibodies or by the replicative efficiency of each monovalent component [144,145]. However, the possibility of integration into the human genome and of causing autoimmune diseases makes vaccination with a DNA vaccine a safety risk [6]. Furthermore, the immunogenicity of DNA vaccines is relatively weak in immunized human hosts [143]. Due to the minimal risk of integration into a host genome and higher safety than DNA vaccines, mRNA technology has been used widely in the development of vaccines for mosquito-borne flaviviruses [27,96]. mRNA vaccines utilize the biosynthetic machinery of host cells to express viral proteins of interest. Moreover, mRNA vaccines are stabilized in cells by the untranslated regions at the 5′ and 3′ ends, and they can also selectively activate innate immune responses through natural modifications [146,147].

DENV

The U.S. Army Medical Research and Materiel Command (AMRDC, WRAIR, NMRC and Vical Inc., Fort Detrick, MD, USA) cloned the E and prM genes of DENV 1, 2, 3 and 4 into the VR1012 plasmid, producing the four plasmids of a tetravalent DENV DNA vaccine (TVDV) [148]. The high safety and good tolerability of TVDV have been demonstrated in clinical trials, and it induced dose-dependent anti-DENV IFN-γ responses [69]. Importantly, this DNA vaccine can eliminate some of the viral interference that has been noted with some DENV LAVs [75].

ZIKV

The DNA vaccine candidate termed GLS-5700 was the first Zika vaccine to enter clinical trials [27,31]. The GLS-5700 vaccine was developed by integrating the gene sequences of the prM and E of numerous ZIKV strains [96]. The study suggested that the vaccine could induce high levels of neutralizing antibodies and provide solid protection against ZIKV challenges in mice and NHPs [25]. GLS-5700 was well tolerated in clinical trials, and three-dose regimens resulted in high binding antibody titers to ZIKV prM/E in all subjects. The antibodies of most volunteers could suppress ZIKV infection in vitro [25]. Two other DNA vaccines, VRC5283 [149] and VRC5288 [150], have been used in NHPs and have shown high rates of seroconversion. VRC5283 has advanced into phase II studies in the United States [27,31,96].
The ZIKV mRNA vaccines mRNA-1325 [151] and mRNA-1839 [152] were produced by enveloping synthetic RNA molecules with lipid nanoparticles. Experiments have shown that mRNA-1839 provided comparable protection, but higher levels of plasma and memory B cells associated with ZIKV-specific neutralizing antibodies, when compared to the DNA vaccine VRC5283. These two vaccines have now entered phase I clinical trials [27]. Self-amplifying mRNA (SAM) is a novel technology for the development of mRNA vaccines [153,154]. A SAM vaccine contains an engineered alphavirus genome responsible for the replication of the vaccine, together with the genes encoding the target antigens [155]. Owing to this self-replication and the greater stimulation of the adaptive immune response by the double-stranded RNA generated during the replication process [156], a lower dose of a SAM vaccine can induce stronger immune responses than other mRNA vaccines [157].

WNV

A WNV DNA vaccine (pCBWN), containing the viral prM and E genes, has been shown to elicit a strong immune response in horses and full protection in mice challenged with WNV [158]. Hall et al. developed a plasmid DNA vaccine (pKUN1 plasmid DNA) encoding the full-length RNA of the Kunjin strain. This vaccine was demonstrated to elicit cross-reactive antibodies that could suppress both the New York strain and the Kunjin strain of WNV. Previous results indicated that 0.1-1 µg of pKUN1 plasmid DNA provided solid protection against a lethal challenge with the New York strain of WNV or the Kunjin strain. These data suggest that pKUN1 plasmid DNA may be a promising vaccine candidate to control WNV infection [159].

Other Types of Vaccines

Other types of mosquito-borne flaviviral vaccines include viral vector vaccines, subunit protein vaccines and virus-like particle (VLP) vaccines. The live viruses used as vectors, including replication-deficient adenovirus (AV), replication-competent measles virus (MV) and vaccinia virus [27,96], enable these vaccines to infect host cells and elicit robust immune responses [160]. Similarly to LAVs, viral vector vaccines cannot be administered to pregnant women or immunocompromised people due to the risk of reversion to increased virulence [27,161]. Subunit protein vaccines are safer to produce and administer than LAVs and INVs, since they do not contain any replicative virus [14,37]. Moreover, subunit protein vaccine technology is particularly relevant for DENV vaccine development, as it can activate immune responses against the different serotypes of DENV [37]. The self-assembly properties of viral structural proteins are exploited to produce VLP vaccines. VLP vaccines present multiple target viral structural proteins while lacking the ability to replicate [27], which makes them both immunogenic and safe [27,162,163].

DENV

Two complex recombinant AV vector vaccines were produced to express the prM and E proteins of DENV-1, 2 (cAdVaxD12) or DENV-3, 4 (cAdVaxD34). The two vaccines could elicit humoral and cellular immune responses against DENV1-4 from 4 to 10 weeks following primary vaccination [75,143]. To avoid the cold chain and the risk of reversion to pathogenicity, researchers continue to be interested in subunit vaccines [14]. A recombinant protein vaccine for DENV (V180), developed by Hawaii Biotech (Merck & Co., Inc., Honolulu, HI, USA), comprised the ectodomains of the E proteins of DENV1-4 [18,75]. The viral E protein genes were amplified by RT-PCR and then cloned into the pMtt1Xho vector.
Studies have indicated that V180 induced neutralizing antibodies to all four DENV serotypes and protected rhesus macaques from viremia following WT virus challenge [75,164], and clinical trials confirmed that V180 also induced neutralizing antibodies in volunteers [69,75].

ZIKV

Vesicular stomatitis virus (VSV) is particularly suitable for development into a multivalent vaccine due to its compatibility with foreign genes [165,166]. The natural hosts of VSV are livestock; thus, VSV can induce strong systemic immune responses in human populations due to the absence of pre-existing immunity [165,167]. Li et al. developed an attenuated recombinant VSV-based vaccine which expressed the prM, E and NS1 of ZIKV. This vaccine candidate induced specific neutralizing antibodies and T cell-mediated immune responses in single-dose immunized mice, offering full protection against a ZIKV challenge [166]. In addition, AV and MV vector systems have also been introduced for ZIKV vaccine development and have recently entered clinical trials. The AV 26 vector-based vaccine carrying ZIKV M and E (Ad26.ZIKV.001) was demonstrated to possess high immunogenicity. Antibodies from immunized recipients protected mice against a lethal ZIKV challenge [27,168,169]. A chimpanzee AV system was also applied to generate a ChAdOx1 Zika vaccine, which expressed the prM and E antigens [170]. The efficacy of the vaccine was evaluated in mice, and it proved to be 100% protective against ZIKV infection, offering full immunity and reducing viremia and viral dissemination in target organs such as the brain and ovaries [27]. AV vectors provide an effective delivery platform due to minimal pre-existing immunity in human beings. MV, a live attenuated RNA virus, is among the safest and most efficient human vaccine vectors to date [171]. Two MV vector-based ZIKV vaccine candidates (MV-Zika and MV-ZIKA RSP), which express the prM and E as antigens, are under evaluation in clinical trials. MV-ZIKA RSP was found to be effective in a mouse model at protecting the fetus by lowering the viral load during ZIKV infection [172]. A Modified Vaccinia Ankara system (GeoVax) was also utilized to produce a novel vaccine candidate, which delivered ZIKV NS1 as the antigen, effectively reducing cross-reacting antibodies and the incidence of antibody-dependent enhancement (ADE) [173]. This vaccine elicited protection against ZIKV challenges in a way that was different from antibody neutralization [27]. To et al. developed a recombinant protein vaccine which expressed the prM and E of the French Polynesian strain (Accession # KJ776791). The vaccine, in combination with an adjuvant, induced neutralizing antibodies and protected mice from viremia [174]. Another recombinant protein vaccine was produced by fusing the E domain III region to the C-terminal Fc region of human IgG. Studies have demonstrated that this vaccine could induce strong immune responses and provide solid protection in multiple animal models [175]. More importantly, this vaccine was effective in pregnant CD-1 (ICR) mouse models, with fetal protection [96,176]. Recently, a ZIKV vaccine expressing the prM, E and NS1 was shown to induce significantly strong immune responses in mice [177]. The first reported VLP vaccine candidate, Zika-VLP, was composed of constructs that expressed the structural C-prM-E proteins to form the VLP and the nonstructural NS2B-NS3 protease to catalyze the cleavage of the structural proteins, finally producing mature VLPs to induce host immunity [178].
The vaccine was verified in mice to induce antibody response patterns similar to those of a ZIKV INV control. Nevertheless, the amount of specific neutralizing antibodies elicited by the Zika-VLP vaccine was much higher than that elicited by the INV control. Moreover, no ADE of DENV infection was observed following Zika-VLP immunization, indicating that dysfunctional responses, such as the induction of cross-reacting antibodies, were not triggered by the displayed epitopes of Zika-VLP [27]. Immune sera generated from immunized mice were found to protect immunodeficient AG129 mice against ZIKV infection. ZIKV VLPs produced by the University of Wisconsin were also a vaccine candidate; they could generate potent neutralizing antibodies, reduce viremia in BALB/c mice and elevate the survival rate of AG129 mice after ZIKV challenges [179]. An optimized Zika-VLP vaccine was designed by expressing dimerized E to assemble the VLP of C-prM-E [180]. This optimization helped to present the envelope dimer epitopes (EDE), which were key to eliciting higher neutralizing antibody titers against ZIKV and DENV in mice compared to the VLP with WT sequences [181,182]. The modification was also demonstrated to decrease the ADE risk in in vitro studies [178,183,184]. It was believed that antibodies targeting EDE could prevent the viral E protein from undergoing the conformational change to a trimer, thus suppressing viral membrane fusion [185]. VLP-based vaccines are relatively safe due to their inability to replicate in host cells, and would be an ideal option for immunocompromised people or pregnant women [186].

WNV

The recombinant ALVAC®-WNV vaccine, based on a modified live recombinant canarypox virus (vCP2017) backbone, expressed the prM/E genes of the WNV NY99 strain and was formulated in a carbomer adjuvant [59,187]. Because it does not replicate in humans and is highly attenuated, the vaccine is fairly safe [188]. ALVAC®-WNV could elicit WNV-specific neutralizing antibodies in a variety of mammals and provide protection against viremia [189]. MVSchw-sEWNV, a recombinant MV vaccine, expressed the E protein of the WNV IS-98-ST1 strain [171,190]. Vaccination with MVSchw-sEWNV could result in high WNV-specific neutralizing antibody titers and provide protection against a lethal challenge with WNV in CD46-IFNAR mouse models. Further experiments indicated that MVSchw-sEWNV was safe and could induce WNV-neutralizing antibodies in squirrel monkeys after a one-dose administration [190]. Another viral vector vaccine for WNV was developed by cloning the gene sequence of the E protein of the WNV LSU-AR01 strain into VSV [191]. The vaccine provided 90% protection against lethal challenge with the LSU-AR01 virus in mouse models [47].

YFV

With high safety and immunogenicity, YFV-17D is considered to be the most successful mosquito-borne flavivirus vaccine; however, it has been demonstrated that YFV-17D can cause viscerotropic disease and neurotropic disease, which are life-threatening adverse events [192][193][194][195]. Despite the fact that these side effects are rare, there is still a great need to develop safer and more effective vaccines due to the virus's high fatality rate [192,196]. The modified vaccinia virus Ankara (MVA) is one of the most commonly used antigen delivery vectors; it is highly attenuated and cannot replicate in humans. Julander et al. cloned the YFV prM and E genes into an MVA vector and developed the YFV vaccine MVA-BN-YF.
Studies indicated that MVA-BN-YF induced a robust humoral immune response and provided solid protection comparable to that of YFV-17D in YFV-infected hamsters. Importantly, the sera of immunized hamsters could also protect naïve hamsters from lethal infection with YFV, confirming that high-titer neutralizing antibodies were elicited by MVA-BN-YF. These data suggest that MVA-BN-YF may represent a safe alternative to YFV-17D [192].

Discussion

Mosquito-borne flaviviruses, some of the most critical human pathogenic arboviruses worldwide, have seriously affected public health in a number of endemic and/or epidemic regions [197]. These viruses cause a broad spectrum of diseases in humans, including fever, encephalitis, meningitis and hemorrhage [198]. In recent years, mosquito-borne flavivirus infections have emerged at an alarming rate worldwide [52]. Effective vaccines remain the best approach to control these diseases because of the absence of effective anti-viral drugs. During recent decades, breakthroughs have been made in the development of flavivirus vaccines (Table 1). The live attenuated YFV-17D and JEV-SA14-14-2 vaccines are regarded as available, safe and efficient mosquito-borne flavivirus preventatives. YFV-17D, in particular, is considered an excellent model to help develop effective vaccines against flaviviruses [199]. As for the other mosquito-borne flaviviruses, Dengvaxia® has been licensed as a dengue vaccine, and many vaccines (TDV, LATV, TDEV, DPIV and TVDV) for DENV have entered clinical trials (Table 1). There are also vaccines for ZIKV (rZIKV/D4∆30-713, ZIPV, GLS-5700, MV-Zika, MV-Zika RSP, Ad26.ZIKV.001 and VCR5283) and WNV (WN/DEN4∆30) that have been evaluated in clinical trials (Table 1). Nevertheless, no effective vaccines against ZIKV and WNV [37] are currently in clinical use. YFV and JEV vaccines are already available; however, ongoing vaccination efforts can hardly prevent infection in at-risk areas because of a lack of financial resources. Unlike smallpox (for which humans are the only hosts), the eradication of mosquito-borne flaviviruses might never be realized because of the wide distribution of mosquitos and the difficulty inherent in controlling them, which means that vaccination against mosquito-borne flaviviruses will be a long and continuous process. Moreover, the administration of some effective vaccines has been restricted by many factors. For example, YFV-17D and SA14-14-2 are not suitable for immunocompromised people and pregnant women, and Dengvaxia® is only recommended for seropositive persons above 9 years of age [21,78]. These limitations have led to a significant decrease in vaccine coverage. Furthermore, the development of vaccines against mosquito-borne flaviviruses is very difficult on account of some special requirements. For example, one of the most important target groups for a ZIKV vaccine is pregnant women, which creates a unique safety requirement for the vaccine. In addition, DENV vaccines must focus on tetravalent formulations because of the absence of cross-protection between the four DENV serotypes. Thus, the successful research and development of a mosquito-borne flavivirus vaccine needs to balance various aspects, including the vaccine components, route of vaccination, target population, expense and social financial resources [6].
Some novel technologies, including ChimeriVax, RepliVax, SAM and subunit protein vaccines, have been used as platforms to construct more effective vaccines and have shown encouraging results in vitro and in vivo, and even in clinical trials. These technologies provide promising prospects for the control of mosquito-borne flaviviruses.

Funding: This research received no external funding.
Sustainability Assessment of a District-Wide Quality Improvement on Newborn Care Program in Rural Rwanda: A Mixed-Method Study

Background: Neonatal mortality continues to be a global challenge, particularly in low- and middle-income countries. There is growing work to reduce mortality through improving the quality of systems and care, but less is known about the sustainability of improvements after initial implementation. We conducted a 12-month sustainability assessment of All Babies Count (ABC), a district-wide quality improvement project, including mentoring and an improvement collaborative, designed to improve quality and reduce neonatal mortality in two districts in rural Rwanda.

Methods: We measured changes in key neonatal process, coverage, and outcome indicators between the completion of ABC implementation and 12 months after the completion. In addition, we conducted 4 focus group discussions and 15 individual in-depth interviews with health providers and facility and district leaders to understand factors that influenced the sustainability of improvements. We used an inductive, content analytic approach to derive six themes related to ABC sustainability to explain the quantitative results.

Findings: Twelve months after the completion of ABC implementation, we found continued improvements in core quality, coverage, and neonatal outcomes. During ABC, the percentage of women with 4 antenatal visits increased from 12% to 30% and remained stable 12 months post-ABC (30%, p = 0.7), with an increase in facility-based delivery from 92.6% at the end of ABC to 95.8% (p = 0.01) 12 months post-ABC. During the ABC intervention, the 2 districts decreased neonatal mortality from 30.1 to 19.4 deaths per 1,000 live births, with maintenance of the lower mortality 12 months post-ABC (19.4 deaths per 1,000 live births, p = 0.7). Leadership buy-in and the development of self-reliance encouraging internally generated solutions emerged as key factors in sustaining improvements, while staff turnover, famine, an influx of refugees, and unintended consequences of new national newborn care policies threatened sustainability.

Interpretation: Despite the discontinuation of key ABC support, health facilities kept the momentum of good practices and were able to maintain or increase the level of prenatal and neonatal quality of care and outcomes over a period of 12 months following the end of initial ABC implementation. Additional studies are needed to determine longer-term sustainability beyond one year.

INTRODUCTION

Over the past two decades, countries across the globe have made substantial improvements in reducing under-five mortality overall, yet 3 million newborn deaths and 2.6 million stillbirths [1] still occur every year. Most (99%) of these deaths occur in low- and middle-income countries (LMICs), where high neonatal mortality rates are often associated with poor quality of maternal and neonatal care services [1,2]. Most of these deaths could be avoided with simple and affordable evidence-based interventions [3,4]. To achieve the health-related Sustainable Development Goal (SDG) of reducing preventable newborn deaths to at most 12 per 1,000 live births by 2030, high-burden countries must effectively and sustainably implement evidence-based interventions in maternal and newborn care, which could reduce neonatal deaths by as much as 71% annually [4,5].
Governments and their partners are currently implementing programs to achieve these goals, but little is known about how these improvements can be sustained beyond the intervention period and the factors related to their short- and long-term sustainability. Understanding these factors and implementation strategies is critical for policy makers, program designers, and funders as they seek to ensure that communities will continue to benefit after projects end. Despite impressive progress over the past decade in reducing under-five mortality, neonatal mortality in Rwanda remains high at 20 newborn deaths per 1,000 live births [6]. Given high rates of facility delivery and the fact that 90% of neonatal deaths occur within the first 48 hours after birth, facility-focused interventions will play a critical role in reducing neonatal mortality. The Rwandan government, together with Partners In Health/Inshuti Mu Buzima (PIH/IMB), a non-profit organization working in Rwanda since 2005, designed and implemented "All Babies Count" (ABC), a district-wide quality improvement program to eliminate preventable neonatal deaths in two districts in the Eastern Province of the country [7,8]. The ABC intervention package included neonatal care provider trainings, limited equipment support, clinical mentorship and quality improvement (QI) coaching, and the establishment of district-wide QI learning collaboratives (Figure 1). The ABC program was successfully implemented in all health facilities across the 2 districts (Kirehe and S. Kayonza), including 24 health centers and 2 district hospitals [7]. A pre-post evaluation of ABC showed significant improvement in multiple measures of antenatal, delivery, and postnatal quality of care and in district neonatal mortality [7]. In this paper, we describe the work to integrate key elements of the ABC program into routine systems and the results of a mixed-methods study evaluating the 12-month sustainability of improvements seen during the ABC program. We also explored factors related to the success and challenges of sustainability from the perspective of key stakeholders, including health providers, mentors, and local leaders.

STUDY SETTING AND CONTEXT

ABC was implemented between July 2013 and September 2015 in two districts, Kirehe and Southern Kayonza (S. Kayonza), located in the Eastern Province of Rwanda. The two districts have partnered with PIH/IMB and the Rwanda MOH to support health systems since 2005 across all 24 health centers (16 in Kirehe and 8 in S. Kayonza) and 2 district hospitals (1 in each district). The two districts serve a population of more than half a million. In Kirehe, three health centers (Kigarama, Rwantonde, and Mahama) were opened after the launch of ABC in October 2013. In S. Kayonza, one health center (HC) (Rwinkwavu HC) does not provide maternity services and was therefore excluded from some indicators, such as the number of pregnant women with antenatal care visits and the number of deliveries. During ABC implementation, efforts were made to integrate selected ABC activities, including neonatal care mentorship and peer-to-peer learning, into existing district routine activities.
No additional neonatal care equipment, supplies, or training were provided to facilities by PIH/IMB during the 12 months after the completion of ABC, and all health facilities in the two districts continued to be managed by the MOH.

STUDY DESIGN

We used a mixed-methods approach with a convergent sequential design to study the sustainability of improvements and ABC activities 12 months after the completion of the ABC program (September 2015), and the factors related to the success and challenges of sustainability from the perspective of key stakeholders, including health providers, mentors, and local leaders. This included a quantitative evaluation using a pre-post design to capture changes in key processes, coverage, and neonatal outcomes from the end of ABC to 12 months post-intervention. We also conducted focus group discussions and in-depth interviews with health workers at the facilities, ABC team members, and local leaders to understand which activities were sustained and which were dropped. We captured their experiences and opinions on neonatal practices, how these changed over time after the ABC implementation phase, and factors they thought were important for increasing or threatening the sustainability of improvements and best practices.

DATA COLLECTION AND SAMPLING

Quantitative: Indicators used for measurement were the same as those used during the ABC program and its primary evaluation [7]. These were selected based on globally accepted process and impact outcomes for maternal and newborn health and were aligned with Rwandan priorities and context. Three time periods (three months each) were considered in our analysis: 1) baseline (pre-ABC), July to September 2013; 2) ABC endpoint, July to September 2015 (9-12 months of ABC interventions); and 3) 9-12 months post-intervention (July to September 2016). Data were extracted from the existing national and district data management systems: the facility-level Health Management Information System (HMIS) and the community health worker (CHW)-level Système D'Information Sanitaire Communautaire (SIS-Com). We used patient charts from facilities and records from CHWs to validate data from HMIS and SIS-Com, respectively. In cases of differences between the two data sources, priority was given to facility and community health records as the primary source. To complement routinely collected facility data, ABC program monitoring data were used for indicators not reported in HMIS or SIS-Com. Facility surveys conducted during the ABC program were also repeated to measure the availability of essential equipment and medications.

Qualitative: Four focus group discussions (FGDs) with eight participants each were conducted in Kirehe and S. Kayonza with nurses and midwives providing maternal or neonatal care at health centers and nurses providing maternal or neonatal care at hospitals. Participants were purposively selected based on their experience and active participation in neonatal and maternity services (at least two years). A facilitator and note-taker were hired and trained to lead the discussions. Semi-structured individual interviews were conducted with 20 participants, including 2 ABC mentors, 4 MOH mentors, 1 program director, 6 nurses (3 from each district, including 2 from health centers and 1 from hospitals), 4 directors of health centers, 1 district hospital director, 1 political leader, and 1 data manager. Recorded interviews occurred at a place that ensured privacy.
For both focus group discussions and interviews, questions focused on the following topics: (1) experiences with the ABC program; (2) challenges faced during implementation; (3) views of the impact on patients; and (4) experiences during the 12 months after implementation. Focus group discussions and interviews were conducted in the local language of Kinyarwanda, were audio-recorded with permission, and were transcribed and then translated into English for data analysis.

ANALYSIS

Quantitative analysis: Monthly data were aggregated for each of the three time periods and analyzed across both districts and individually. Point estimates were calculated for each indicator at each time period, and the change that occurred over time was analyzed. The Wilcoxon signed-rank test was used to test the significance of differences for continuous variables where possible. Improvement was measured from baseline to ABC endpoint, and sustainability was measured from ABC endpoint to 12 months post-ABC completion. Excel and Stata v.14 (College Station, TX: StataCorp LP) were used for data management and analysis, respectively. A p-value <0.05 was considered significant.

Qualitative analysis was inductive and employed a content analytic approach [9]. A subset of transcripts was open coded for the purposes of developing a codebook; the resultant codebook was then used to direct code all transcripts. Coded data were examined inductively to derive emergent themes that were developed into conceptual categories describing either key barriers or facilitators of ABC implementation and sustainability. Each category consisted of a descriptive label, an operational definition, and key illustrative quotes. These initial categories were reviewed and revised to create a set of six final themes that appear in the Results section below. After independently analyzing the quantitative and qualitative data, content areas represented in both data sets were identified, and the results were compared, contrasted, and synthesized. The separate results were then interpreted. The themes from focus group discussions and individual interviews were assessed in terms of their ability to explain the quantitative results.

CHANGES IN POST-ABC ACTIVITIES AND PERFORMANCE

Twelve months following the completion of ABC, Kirehe and Kayonza district health leadership sustained selected key ABC activities, although at lower intensity (Table 3). Mentorship was sustained but with a decrease in frequency, from an average of 1 mentoring visit per month in S. Kayonza at the end of ABC to 0.8 visits per month, with a similar drop in Kirehe (from 0.8 to 0.4 visits per month). Peer-to-peer learning through learning sessions, as well as regular review of key neonatal data, was incorporated into routine monthly coordination meetings. Despite this decrease, the improvements seen in most key quality and coverage indicators during ABC were sustained and/or increased 12 months after the completion of ABC (Table 2). The availability of essential medicines and functioning equipment to deliver newborn care remained high in both districts despite no further direct supplies from PIH/IMB. In Kirehe, the availability of essential medications, which had increased during ABC (from 67% to 78%, p = 0.04), kept increasing significantly 12 months after the end of ABC (83%, p = 0.003) (Table 1). The percentage of women giving birth at a health facility, already high in both districts at the end of ABC (92.6%), increased significantly 12 months after the end of ABC (95.8%, p = 0.01).
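For readers who want to reproduce this kind of pre-post comparison, the following is a minimal sketch of the Wilcoxon signed-rank test described in the analysis section above, written in Python rather than the Stata actually used by the study; the facility-level values are hypothetical, not the study's data.

from scipy.stats import wilcoxon

# Hypothetical paired facility-level values for one indicator (e.g., % of
# women completing four ANC visits) at ABC endpoint and 12 months post-ABC.
abc_endpoint = [28.0, 31.5, 29.2, 33.0, 27.8, 30.4, 32.1, 29.9]
post_abc_12mo = [29.1, 30.8, 30.0, 32.5, 28.4, 31.0, 31.7, 30.3]

# Wilcoxon signed-rank test on the paired differences; a p-value >= 0.05 is
# read here as "no significant change", i.e., the improvement was sustained.
stat, p_value = wilcoxon(abc_endpoint, post_abc_12mo)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")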
Similarly, the percentage of pregnant women who completed four antenatal care (ANC) visits remained stable across the two districts. Aggregated across both districts, improvements in labor management and newborn care after birth were seen when comparing the end of ABC with 12 months later: steroid administration for preterm labor (41.7% to 58%, p = 0.01) and systematic monitoring of danger signs for newborns 24 hours after birth (98.7% to 99.3%, p = 0.7). The percentage of babies with immediate skin-to-skin contact after delivery, already high at the end of ABC (97.4%), decreased slightly (96.2%, p = 0.2). Similarly, neonatal mortality dropped from 30.1 deaths per 1,000 live births at the beginning of ABC to 19.4 deaths per 1,000 live births, and the decrease was maintained 12 months post-ABC at 19.4 deaths per 1,000 live births (p = 0.7). Building capacity and ownership for QI was also a goal of ABC. At the end of ABC, there were 47 active QI projects aimed at improving maternal and neonatal care across both districts. While the number decreased, QI projects were still being implemented 12 months post-ABC: all facilities (100%) in S. Kayonza and 60% in Kirehe were implementing at least one QI project per health center. In addition, a number of key improvements initiated by health center teams during the ABC period continued 12 months later (Table 1).

QUALITATIVE FINDINGS

Six major themes linked to the success or challenges of sustainability of ABC and related improvements were inductively identified. Quotes reflecting each theme are summarized in Table 4.

Leadership "buy-in" and "ownership" (Facilitator): This emerged as one of the most powerful factors facilitating various activities aimed at improving the quality of neonatal care. Participants not only defined leadership as those occupying top positions of political or administrative authority but also extended the concept to include any person across the continuum of care whose engagement and enthusiasm pushed forward the neonatal care agenda. Participants noted the presence and active participation of district authorities during learning collaborative sessions and coordination meetings after the ABC period.

Young leadership (Facilitator and Challenge): Interviewees described how national, district, and local leaders were ambitious, optimistic, confident, and committed, which was both a facilitator and a barrier. Leaders were described as committed and ready to do whatever it takes to reduce neonatal mortality. However, most were young and often lacked managerial and technical experience. This combination of ambition and lack of managerial expertise meant that they were easily pulled in many directions, and participants explained that this led to a loss of focus on neonatal health, which affected the momentum after the ABC period.

Turnover of trained staff (Challenge): The mobility of staff was identified as an important challenge to sustaining improvements. Health care workers who chose to leave did so for multiple reasons. During the implementation phase of ABC, selected nurses and midwives at the hospitals and health centers were trained in maternal and neonatal care services. As they became skilled and experienced, they became more attractive to other health facilities, particularly those in urban areas. The departure of trained staff for other health facilities left their previous facilities with critical staff shortages, which negatively impacted services.

Stuff happens!
(Challenge): Both unplanned adverse events and unexpected opportunities had an impact on neonatal outcomes. Participants described unexpected events that affected neonatal outcomes both over the course of ABC implementation and during the one-year post-intervention period. These events were not anticipated during the design of the ABC program and could not have been predicted, so stakeholders were not always prepared to respond to them. The events included a prolonged drought causing famine in the Eastern Province, the influx of Burundian refugees into Kirehe District, and unintended consequences of new policies from the district or central level, such as the introduction of an antenatal care fee, which decreased the number of women attending antenatal care services.

Development of self-confidence fostered internally generated solutions (Facilitator): Participants noted the emergence and importance of locally made solutions to improve antenatal care services, delivery management, and postnatal care. Innovative, individualized ideas that could be integrated into ABC practices at each site were generated by health center and hospital teams with the support of ABC mentors. These internally generated solutions started during the ABC implementation period and continued in the year that followed the end of the program. They took different forms depending on the health facility, including 1) partnering with community health workers (CHWs) and local leaders to identify pregnant women and encourage them to attend ANC visits; 2) improving internal communication across different services within the health center; and 3) motivating women to attend health centers for antenatal care and deliveries by providing non-financial incentives, such as clothes, to pregnant women who attended the center for their first ANC visit during their first trimester.

Gap between high demand for maternal and neonatal services and adequate human resources (Challenge): Interviewees explained that staff shortages and work overload made it difficult for staff to meet expected goals for newborn care services, a barrier to achieving and sustaining improvements. Community demand steadily increased, and a growing number of women began seeking care at health centers or hospitals. Almost all pregnant women came to deliver at a health center or hospital. Unfortunately, human resources did not increase to match this demand, which negatively impacted the quality of services provided to pregnant women.

Table 4. Themes and examples of quotes.

Leadership "Buy-In" and "Ownership": "During the second learning collaborative in Kirehe, the mayor, the vice mayor and the entire leadership team were present. They stayed for the whole day. They took notes and asked questions. I was surprised to see the Mayor sitting there for hours. And you can see that he was very interested and curious to see what is happening. We were feeling supported." -ABC mentor

"…Kabarondo titulaire is a nun. She is also a student; she is doing her bachelor degree in nursing, so she is very busy. One day I went to Kabarondo for mentorship and found the maternity register not completed, and showed the maternity team what was missing. I gave feedback to the team and the titulaire was there. Two weeks later, when I went back there, I found everything in place. The titulaire, despite her busy schedule, took time to fix that herself. You understand that she cares about the work. She did not delegate, she did the work, because she cares."
-ABC mentor

Benefits and Drawbacks of Young Leadership: "My impression is that you have a country with a relatively young leadership. Relatively young in the sense that it's a maturing leadership. On one hand, you have this complexity of an incredibly ambitious Ministry of Health at the highest level of leadership. On the other hand, you have different layers of young leaders, who are dynamic but lack managerial and technical experience." -ABC program coordinator

"…Rwanda is a young nation. From its background of genocide and civil war, they had to build everything from scratch. We did a lot of things from 1994, almost miracles. But you understand that we are still learning how to build this country." -Political leader, Kirehe

Losing Trained Staff as a Key Barrier to Sustainable Improvements: "…There are HCs like Rwantonde, Kabuye and Gahara; there are factors behind [their performance]. Normally there were some key persons at the health center we used to work with. During mentorship, we were working with everyone, but there are specific people who were driving the improvement. Unfortunately, most of these people left. They are no longer at the HC, either because of the issue of documents (diplome) or just because they found another job." -ABC mentor

"This was an error, to build capacity in just one person. This was the case in Kabuye, Gahara and in other HCs. For other health centers, you can find nurses who have been working in that department (maternity) for many years; so when they bring a new staff member, it is a problem because he/she will start from scratch." -ABC mentor

Unplanned Adverse Events and Unexpected Opportunities Impact Neonatal Outcomes: "I think [the change in ANC coverage] makes total sense. If you look at the baseline and during ABC, there was no famine yet, or maybe it was not much. But it worsened at the end of 2015 and in 2016, which is the period of sustainability. Maybe the drought started a bit early, but people started to feel it in that period. And it was terrible. I know people who fled the country to Tanzania and Uganda because of the famine." -Health center nurse

"There was a time when few pregnant women were coming to our health center for antenatal care visits. And when we asked community health workers, they explained that people are very hungry because of the drought. Husbands prefer to flee the country to find jobs. They leave their wives and children home. It is difficult to travel to the health center when you are hungry and alone at home." -Nurse from health center in S. Kayonza

"At Kirehe hospital, normally we receive between 150 to 200 women in maternity per month. But when Burundian refugees came in, we were overwhelmed with pregnant women, with more than 300 admissions per month. Most of the women came from Mahama." -Nurse at Kirehe hospital

Building Self-Reliance Encourages Internally Generated Solutions: "In Ndego, the health center worked with CHWs to identify women who missed their appointment, to sensitize them to return to the health center for follow-up. They also talk to the women in the village to understand why they don't return. This has happened in Ruramira too. Even though there were some quality issues, they had a register where they documented women who missed their appointment, to track them later. They used to call them. This helped a lot to increase the number of women who come for antenatal care visits."
-ABC mentor

Gap Between High Demand for Maternal and Neonatal Services and Adequate Human Resources: "If the hospital leadership knew deliveries went up from 200 to 300 in a month because of the influx of refugees, with only 12 nurses in maternity working day and night for 7 days a week, they should be able to recruit more nurses to adjust. Otherwise, what are you expecting the 12 nurses to do in the face of the increasing demand? Same for neonatology: a lot was done during the intensive phase and mortality went down significantly. There was a time when 1 nurse alone had to manage 30 sick babies. Of course, mortality will go high. So, if deliveries went high, the number of sick babies will go high too, and the number of babies with birth asphyxia too. So, I think if the hospital can't take measures to hire more personnel, you understand there is an issue. So here I may say that health providers did not do a good job, but it is not their fault. The problem is they were overwhelmed." -Health center nurse in Kirehe

DISCUSSION

Despite decreased direct support and ongoing challenges, core components of ABC were sustained, and health facilities continued to initiate and/or maintain existing QI projects and to sustain or improve maternal and neonatal performance measures. We identified from key informants both program design and external contextual factors that contributed to the resilience of the improvements. These included strong leadership and ownership as well as confidence and ability in developing locally made solutions. We also identified factors that emerged as challenges to sustaining the ABC-associated progress, such as the influx of refugees into Kirehe and the famine following a prolonged drought. Despite these challenges, the improvements associated with ABC were sustained in key process and system quality, coverage, and outcome indicators. While there is a wealth of literature discussing the sustainability of public health programs [10,11], we did not find any that explores sustainability after the end of a QI intervention to improve systems for neonatal care quality in limited-resource settings. Our findings on the sustainability of improvements may be related to the design and implementation strategies, which were consistent with factors associated with sustainability in the literature. The ABC program was intentionally designed to include a number of these important strategies, such as engagement with a range of key stakeholders (Ministry of Health authorities, district leaders, and representatives from health facilities, through regular joint meetings) throughout the project; a combination of trainings, supervision/mentorship, and learning collaboratives; and strong, ongoing local community engagement. For example, a systematic review of a wide range of evidence-based programs showed that a program is more likely to be sustained if it reported greater community engagement, communication with key stakeholders, knowledge of the program's logic model by the stakeholders, and sustainability planning [11,12]. In addition, strategies targeting improved performance of health care providers are likely to produce sustained outcomes if they combine training, supervision, and group problem solving [13]. The ABC program also demonstrated flexibility to adapt over time in response to new factors, a strategy also identified as influencing the sustainability of improvements [14].
Dickson and colleagues assessed health-systems bottlenecks and strategies to accelerate high-impact interventions to reduce neonatal mortality in 13 high-burden countries. While that study did not focus on the sustainability of programs, its recommendations for effective program implementation (including dynamic leadership, community empowerment, and capacity building for the rural health workforce to upgrade relevant clinical skills) were consistent with the design of ABC and with our findings on factors associated with the sustainability of the ABC impact [15]. Our study has a number of limitations. The absence of comparison groups from other districts in Rwanda with similar settings does not allow us to make any inference about sustainability relative to other districts' neonatal quality and mortality. The specific findings may also have limited generalizability to settings outside rural districts and Rwanda; however, the broader principles are likely applicable to other interventions and settings. While there is no commonly accepted time point for defining when a program is sustained, we can only report findings at 12 months for this study. In conclusion, we found 12-month sustainability of the key process improvements, including supply chain and care delivery, as well as of the mortality improvements. Key implementation strategies identified as supporting sustainability, and consistent with the literature, included stakeholder engagement throughout planning and implementation, intentional integration of core components of ABC into MOH roles and functions, and building capacity for QI. Additional research is needed to further understand the longer-term sustainability of improvement activities and quality, and how to ensure that the needed resources and strategies are integrated into QI initiatives to maintain the advances critical to continuing to reduce neonatal mortality.
Caregiver Burden, Productivity Loss, and Indirect Costs Associated with Caring for Patients with Poststroke Spasticity

Objective: Many stroke survivors experience poststroke spasticity and a related inability to perform basic activities, which necessitates patient management and treatment and exerts a considerable burden on the informal caregiver. The current study aims to estimate burden, productivity loss, and indirect costs for caregivers of stroke survivors with spasticity.

Methods: Internet survey data were collected from 153 caregivers of stroke survivors with spasticity, including caregiving time and difficulty (Oberst Caregiving Burden Scale), Work Productivity and Activity Impairment measures, and caregiver and patient characteristics. Fractional logit models examined predictors of work-related restriction, and work losses were monetized (2012 median US wages).

Results: Mean Oberst Caregiving Burden Scale time and difficulty scores were 46.1 and 32.4, respectively. Employed caregivers (n=71) had overall work restriction (32%), absenteeism (9%), and presenteeism (27%). Caregiver characteristics, lack of nursing home coverage, and stroke survivors' disability predicted all work-restriction outcomes. The mean total lost-productivity cost per employed caregiver was US$835 per month ($10,000 per year; 72% attributable to presenteeism).

Conclusion: These findings demonstrate the substantial burden of caring for stroke survivors with spasticity, illustrating the societal and economic impact of stroke that extends beyond the stroke survivor.

Introduction

Stroke is the fourth-leading cause of death and a leading cause of long-term disability in the US [1,2]. Most stroke survivors need home care, which is usually provided by a family member, but the continuous, long-term commitments required of caregivers are often associated with psychological and financial burdens [3-5]. Nearly one-fifth (17%) of stroke survivors experience poststroke spasticity (PSS), a disorder of the sensorimotor system characterized by a velocity-dependent increase in muscle tone [6]. PSS frequently causes pain and interferes with hand and arm positioning, which affects grasping, self-care, and other activities of daily living of the stroke survivor [7,8]. Spasticity-related stiffness and discomfort can interfere with physical activities such as ambulation, hygiene, and dressing, in addition to having psychological consequences for mood and self-esteem [9]. The inability of a stroke survivor with PSS to perform basic activities exerts a considerable burden on the informal caregiver, defined as someone close to the stroke survivor who is not hired to provide caregiving services [7,8]. Significant humanistic burden, such as depression and anxiety, among caregivers of PSS patients has been reported [10]. Furthermore, informal caring for stroke survivors is associated with both humanistic costs, including decrements in health-related quality of life, and indirect economic costs, such as restrictions in work productivity. While the humanistic burden has been well documented, the economic costs associated with providing care for PSS patients are not. Therefore, the purpose of this study was to assess and quantify the economic burden of providing care for stroke survivors with PSS.
Methods

Study design

Participants were recruited from the 2008 US National Health and Wellness Survey (NHWS) and the Ailment Panel of Lightspeed Research (Warren, NJ, USA). NHWS is an annual, cross-sectional, self-administered Internet-based survey given to a sample of 75,000 adults (aged 18 years or older) who are identified through Lightspeed Research's Internet panel. Members of the Internet panel are recruited through opt-in emails, co-registration with panel partners, e-newsletter campaigns, online banner placements, and both internal and external affiliate networks. A stratified random sampling procedure was implemented for NHWS so that the final sample was representative of the demographic composition of the adult US population. NHWS (2008) respondents who reported that they care for a stroke survivor (n=854) were invited via email to participate in this study. To increase the sample size, members of the Lightspeed Research Internet panel who did not participate in NHWS were also contacted, and those who reported providing care to a stroke survivor were invited to participate. Caregivers were eligible to participate if they provided care for a friend or family member with PSS who had experienced spasticity of the upper or lower limbs for at least 6 months, and if they received no payment (or reimbursement by a government agency or insurance) for providing care. All caregivers provided informed consent. The study was reviewed and approved by Essex Institutional Review Board (Lebanon, NJ).

Caregiver characteristics

Caregiver demographics and clinical characteristics obtained included sex, age, ethnicity, education, household income, employment status, relationship to the stroke survivor, symptoms experienced in the past month (eg, headaches, nervousness or anxiety, stomach pain), and number of comorbidities. Caregiver health outcomes were assessed using the mental component summary (MCS), physical component summary (PCS), and health utilities (SF-6D) scores derived from the Short Form-12, Version 2 (SF-12v2) [11], and the Patient Health Questionnaire-9 [12].

Stroke survivor characteristics

Caregivers provided characteristics of the stroke survivors under their care, which included age, sex, race/ethnicity, employment status, insurance status, symptoms experienced (eg, abnormal posture, inability to sleep), and body areas affected by spasticity. Caregivers also rated the level of disability that stroke survivors experienced using the Disability Assessment Scale (DAS). The DAS is a validated scale that consists of four domains (hygiene, pain, dressing, and impact of limb posture on daily living), each of which was rated on a Likert-type scale (0 = no disability, 1 = mild disability, 2 = moderate disability, and 3 = severe disability) [13].

Caregiver burden

The Oberst Caregiving Burden Scale (OCBS) [14,15] and the Bakas Caregiving Outcomes Scale (BCOS) [16] were used to assess the burden of providing care. The OCBS is a 15-item instrument that rates caregiving tasks based on time spent (1 = none, 5 = a great amount) and difficulty of task (1 = not difficult, 5 = extremely difficult). The 15 items of each subscale are then summed (range, 15-75; higher scores indicate greater caregiver burden). The BCOS is a 15-item instrument that measures life changes as a consequence of caregiving, with each item scored on a 7-point response scale (1 = changed for the worst, 7 = changed for the best). Scores are summed (range, 15-105; lower scores indicate greater caregiver burden).
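As a concrete illustration of the scale scoring just described, here is a minimal Python sketch; the item ratings are hypothetical and the function names are ours, not part of any published scoring software.

def ocbs_subscale(items):
    # Sum 15 OCBS item ratings (each 1-5); higher scores = greater burden.
    assert len(items) == 15 and all(1 <= i <= 5 for i in items)
    return sum(items)  # possible range: 15-75

def bcos_total(items):
    # Sum 15 BCOS item ratings (each 1-7); lower scores = greater burden.
    assert len(items) == 15 and all(1 <= i <= 7 for i in items)
    return sum(items)  # possible range: 15-105

time_items = [4, 3, 5, 2, 3, 4, 3, 2, 4, 3, 3, 2, 4, 3, 3]
difficulty_items = [2, 2, 3, 1, 2, 3, 2, 1, 3, 2, 2, 1, 3, 2, 2]
print("OCBS time score:", ocbs_subscale(time_items))
print("OCBS difficulty score:", ocbs_subscale(difficulty_items))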
Work productivity and activity limitation

Work productivity was assessed using the validated Work Productivity and Activity Impairment (WPAI) questionnaire [17,18], which was tailored to focus specifically on the impact of caregiving. The WPAI items were used to generate percentages (0%-100%) that quantify absenteeism (percentage of time missed from work), presenteeism (percentage of restriction while at work), overall work restriction (percentage of total restriction due to either absenteeism or presenteeism), and overall activity limitation (percentage of limitation in daily activities) due to caregiving responsibilities, with higher values indicating greater limitation. Only those currently employed (full-time, part-time, or self-employed) were asked about work productivity, although all caregivers were asked about activity limitation.

Human capital method

Costs due to lost productivity were calculated for each caregiver using the human capital method (HCM), according to how much a disease decreases an individual's ability to be productive (eg, as measured by rates of absenteeism and presenteeism and their related monetary costs based on lost wages) [19]. In this study, 2012 median weekly income figures were obtained for full-time (US$768 per week) and part-time workers ($233 per week) in the US from the Bureau of Labor Statistics, and for self-employed workers ($538 per week) from estimates published on the Internet by a private labor market research firm [20,21]. For each respondent, an hourly rate was estimated by dividing the median weekly income by 40 hours (the typical work week) for full-time and self-employed workers, or by 20 hours for part-time workers. Next, the number of hours missed in the last week because of one's health (absenteeism) and the number of hours missed in the last week because of health restriction while at work (presenteeism) were each multiplied by the hourly rate and then multiplied by four (the average number of work weeks in a month) to obtain monthly total lost-wage estimates.

Indirect costs

In general, costs due to informal caregiving were considered indirect costs in this study. Total indirect costs were calculated by including costs associated with lost work productivity, personal travel time to visit the stroke survivor, and any out-of-pocket caregiving-related expenditures. Travel time costs were calculated using a $23.90 per person-hour estimate for nonbusiness travel by surface modes of transport [22]. The sum of out-of-pocket expenses borne by caregivers for additional expenses such as medical care (eg, purchase of grab bars and/or shower chairs), adult day care/respite care, food delivery, caregiver support services, etc, was calculated. All cost estimates are reported in 2012 US dollars.

Statistical analyses

General caregiver and stroke survivor characteristics, levels of caregiver burden, work productivity, and activity limitation were calculated using standard descriptive statistics. DAS cut-off scores were used to create three categories of PSS disability: mild (total DAS score ≤4), moderate (score 5-8), and severe (score 9-12). The relationship between PSS disability scores and caregiver burden was explored using analysis of variance. Comparisons of quality-of-life scores against population norms were conducted with one-sample t-tests. P<0.05 was used to indicate statistical significance across all analyses.
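To make the monetization concrete, the following is a minimal Python sketch of the human capital calculation described above, assuming the cited 2012 median wages; the WPAI hour inputs are hypothetical, not study data.

# 2012 median weekly incomes cited in the text, with the assumed work weeks.
WEEKLY_INCOME = {"full_time": 768.0, "part_time": 233.0, "self_employed": 538.0}
WEEKLY_HOURS = {"full_time": 40.0, "part_time": 20.0, "self_employed": 40.0}
WEEKS_PER_MONTH = 4  # average number of work weeks in a month, per the text

def monthly_lost_wages(status, hours_absent, hours_impaired):
    # Monetize weekly absenteeism and presenteeism hours into monthly costs.
    hourly_rate = WEEKLY_INCOME[status] / WEEKLY_HOURS[status]
    absenteeism_cost = hours_absent * hourly_rate * WEEKS_PER_MONTH
    presenteeism_cost = hours_impaired * hourly_rate * WEEKS_PER_MONTH
    return absenteeism_cost, presenteeism_cost

# Hypothetical full-time caregiver: 3 hours absent, 6 hours impaired per week.
absent, present = monthly_lost_wages("full_time", 3.0, 6.0)
print(f"Absenteeism: ${absent:.0f}/month, presenteeism: ${present:.0f}/month, "
      f"total: ${absent + present:.0f}/month")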
The marginal effects of various caregiver and patient characteristics on absenteeism, presenteeism, and overall work restriction were estimated using generalized linear models with a logit link function and a binomial distribution family (commonly referred to as "fractional logit" models). These models are well suited to handle dependent variables that are bounded within the 0-1 interval (as is the case for the work productivity measures evaluated in this study), along with many observed zeroes and/or ones [23,24]. The model-predicted absenteeism, presenteeism, and overall work restriction were computed at the mean values of the explanatory variables.

Results

Caregiver characteristics

Of the top five symptoms that caregivers experienced in the past month, 64% reported headache, 63% had sleep difficulties, 49% had anxiety/nervousness, 29% had lightheadedness, and 29% had stomach pain. Caregiving duties were reported as the cause of these symptoms by approximately 46% of those who reported headache, 42% of those who reported sleep difficulties, 28% of those who reported anxiety/nervousness, 71% of those who reported lightheadedness, and 53% of those who reported stomach pain. High blood pressure was the most frequent diagnosis, occurring in 42% of caregivers; 41% had high cholesterol and 22% had depression. Caregiving duties were reported as the cause by approximately 31% of those who reported having high blood pressure, 10% of those who reported having high cholesterol, and 50% of those who reported having depression. MCS and PCS quality-of-life scores from the SF-12v2 were significantly below the US population norm of 50 (mean MCS = 42.38, P<0.01; mean PCS = 47.74, P = 0.01; Table 1).

Stroke survivor characteristics

Of the 153 stroke survivors with spasticity, 57% were female, 87% were white, the mean age was 75.0 years, and 26% were reported as being disabled (Table 2).

Caregiver burden

The caregiver burden was greater for perceived time spent than for difficulty, according to mean OCBS time (46.1) and difficulty (32.4) subscale scores. At least one-third of caregivers reported having spent a moderate to a great deal of time (3-5 on the time subscale) assisting with nursing, personal care, walking, and transfer (eg, from bed to a chair) tasks. More than two-thirds spent a moderate to a great deal of time providing emotional support, monitoring the stroke survivor's progress, talking to health care professionals regarding the stroke survivor's condition and treatment plan, providing transportation, helping with additional tasks at home and outside the home, and managing the stroke survivor's finances and medical bills. Both the time and difficulty subscale scores increased significantly with the stroke survivor's disability level. The mean time subscale scores were 39.3, 45.3, and 49.3, and the mean difficulty subscale scores were 25, 32.5, and 35.6, for mild, moderate, and severe disability, respectively (P<0.05 for pairwise group comparisons). Additionally, caregiver burden as measured by the mean BCOS score was 49.4 (SD = 13.1).

Work productivity and activity limitation

Among employed caregivers (n=71), mean absenteeism was 9% (SD = 15%) and presenteeism was 27% (SD = 26%), leading to an overall caregiving-related work restriction of 32% (SD = 29%). The mean number of work hours lost per week because of restricted productivity was 8.8 (SD = 9.3) hours. The mean activity limitation caused by caregiving responsibilities, for both employed and unemployed caregivers, was 40% (SD = 28%; Table 3).
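The fractional logit specification described above can be sketched as follows; this is a generic Python illustration using statsmodels with simulated data, not the study's actual model or covariates.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "work_restriction": rng.uniform(0, 1, 200),  # proportion in [0, 1]
    "caregiver_age": rng.normal(50, 10, 200),
    "das_score": rng.integers(0, 13, 200),       # stroke survivor disability
})

# GLM with a logit link and binomial family handles a 0-1 bounded outcome;
# robust standard errors are customary for fractional responses.
X = sm.add_constant(df[["caregiver_age", "das_score"]])
model = sm.GLM(df["work_restriction"], X, family=sm.families.Binomial())
result = model.fit(cov_type="HC1")
print(result.params)

# Predicted restriction at the means of the explanatory variables,
# mirroring the "computed at mean values" approach in the text.
x_mean = X.mean().to_frame().T
print("Predicted work restriction at means:", result.predict(x_mean)[0])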
Among caregiver characteristics, age and the number of children under 18 years of age significantly predicted absenteeism and overall work restriction. Caregiver income levels significantly affected absenteeism (income <$25,000, P = 0.02; income $25,000-$49,999, P = 0.041, versus those with income ≥$75,000) but did not have a statistically significant effect on overall work restriction. Caregivers working part-time or self-employed reported higher productivity losses than those working full-time, although these effects were not statistically significant (Table 4).

Table 2 (excerpt). DAS domain scores among stroke survivors with spasticity (N=153), mean (SD): hygiene 1.8 (1.0); pain 1.8 (0.9); dressing 1.9 (1.0); limb posture 2.0 (0.9). Notes: (a) Individual/family insurance plans are purchased directly by the stroke survivor or a family member; TRICARE is the health care program for uniformed service members (ie, active, guard/reserve, retired) and their families. Abbreviations: DAS, Disability Assessment Scale (score range, 0-12); SD, standard deviation.

The stroke survivor's disability level and a lack of nursing home coverage were significantly associated with the caregiver's work productivity. The predicted absenteeism, presenteeism, and overall work restriction at the mean disability level (DAS score = 7.0) were 15%, 30%, and 38%, respectively, significantly greater than at a DAS score of 6.0 (Figure 1). Lack of nursing home coverage on the stroke survivor's health plan was associated with a 32% increase in the caregiver's work restriction (P<0.001). Also, a lack of other caregivers within the family had a significant impact on work productivity with regard to absenteeism (a 10% increase in work hours lost compared with having an additional family caregiver) but not presenteeism or overall work restriction.

Human capital method

Using the HCM to monetize these caregiving-related work productivity figures on a per-caregiver basis, lost productivity due to absenteeism was $269 (SD = $691) per month, lost productivity due to presenteeism was $598 (SD = $670) per month, and total lost productivity was $835 (SD = $1,074) per month among working caregivers (Figure 2). Annually, total lost-productivity costs were $10,000 for each employed caregiver, with presenteeism contributing 72% of these costs.

Indirect costs

In addition to work-productivity costs, other significant indirect costs were incurred by caregivers (eg, costs of travel for executing caregiving responsibilities and out-of-pocket expenditures for the stroke survivor's health care or rehabilitation). Forty-one percent of PSS caregivers reported that they travel at least once weekly to visit the stroke survivor. On average, PSS caregivers spent 2.5 hours (SD = 10 hours) a week traveling to visit the stroke survivor, incurring $242 (SD = $922) per month for travel. PSS caregivers also spent, on average, $231 (SD = $655) monthly on various caregiving-related expenses. In total, personal travel time and out-of-pocket expenditures resulted in opportunity costs of $5,669 per caregiver per year.

Discussion

Spasticity is a debilitating complication of stroke, leading to increased activity limitations and participation restrictions [7]. Very few studies have examined the impact of PSS on the caregiver, including the burden and indirect costs associated with caregiving responsibilities [7]. These results document the high burden imparted by caregiving for stroke survivors with PSS.
This study found that caregivers spent a great amount of time helping stroke survivors with PSS with medical care, performing various physical tasks, and providing emotional support. [Table 4 reports the average marginal effects of caregiver and patient characteristics on absenteeism, presenteeism, and overall work restriction among employed caregivers.] Most tasks were perceived by caregivers to be of mild-to-moderate difficulty, although the amount of time spent performing these tasks was perceived as more burdensome. The demographic characteristics of patients with PSS and their caregivers are not well characterized in the literature. To our knowledge, this is the first study to describe caregiver characteristics and quantify the economic burden of caregiving for patients with PSS. Therefore, the survey sample in this study cannot be compared with caregiver characteristics in the general PSS population. Both the NHWS Internet sample and the Lightspeed Research Internet panels, from which the study sample was derived, are representative of the US population. Based on the characteristics of stroke survivors (regardless of spasticity) reported in nationally representative US survey studies, African-Americans may be underrepresented in our sample [3,25]. The mean (SD) BCOS score in this study was 49.36 (12.84), which suggests a greater burden for caregivers of stroke survivors with spasticity than the average burden reported in a study of 147 caregivers of stroke survivors who were 4 months poststroke with mild cognitive or language impairment, with or without spasticity (BCOS score, 58.4 [10.8]) [16], and in a study of 21 caregivers of patients with heart failure, the majority of whom were New York Heart Association Class II and III, suggesting moderate impairment (BCOS [12.5]) [26]. However, direct comparison with these populations was not possible in the current analysis. Caregivers experienced a high cost in productivity as measured by absenteeism, presenteeism, work productivity, and activity limitation. When monetized, the associated costs of total lost productivity were substantial, totaling $10,000/year for each employed caregiver, with presenteeism contributing 72% of these costs. Additionally, the number of work hours lost by PSS caregivers because of presenteeism was nearly four times the national average estimate (2.3 hours per week) [27]. The impact of PSS disability on productivity loss was found to be linear over a total DAS score range of 0-8, beyond which productivity losses were inelastic to further increases in disability, potentially because stroke survivors with higher PSS disability are more likely to avail themselves of paid caregiving or nursing home services than stroke survivors with mild-to-moderate disability. The monetary costs associated with caregivers' loss of productivity were estimated using the median national wage, which likely provides a conservative estimate, since the average caregiver age in this study was 51.6 years, an age likely associated with higher income than the national average. By comparison, the indirect costs (calculated using the same method) in the 2006-2008 NHWS were $9,599 for patients with type 2 diabetes plus painful peripheral diabetic neuropathy and $7,544 for patients with type 2 diabetes without neuropathy [28].
Additionally, this study suggests that opportunity costs due to caregiving tasks (eg, personal travel time, out-of-pocket expenditures) may be substantial ($5,669 per caregiver per year). In summary, the total indirect costs of PSS, which include caregiver burden, productivity losses, and opportunity costs, are considerable; therefore, the overall economic burden of PSS may be underestimated if indirect cost estimates are not considered. This study also found that caregiver burden increased with the level of the stroke survivor's disability. Similar findings were reported in a post hoc analysis of an open-label study of burden among caregivers of patients with upper-limb PSS, which found that caregiver burden, measured by the number of hours of caregiver assistance per week, was significantly associated with increased disability in the areas of hygiene and dressing [8]. Although this study lacks a direct comparison with stroke survivors without spasticity, an indirect comparison with the results for a general poststroke population observed in another study showed a moderate increase in the time component of the OCBS among stroke survivors with spasticity [15]. This study has several limitations. First, the main focus of this study was productivity loss among employed caregivers, who constituted only 40% of the PSS caregiver population. Compared with employed caregivers, retired caregivers spent an average of 80% more time caregiving. This suggests that the overall burden may be even greater in this population, which has societal implications even though the economic burden may be smaller than for employed caregivers. Another study limitation can be attributed to the way in which the survey questions were asked. For example, lost-productivity figures were assumed to be attributable solely to caregiving responsibilities; however, respondents may have attributed some non-caregiving-related productivity loss to caregiving responsibilities, thus inflating the perceived burden. Also, this study relied on the HCM to evaluate indirect costs, which estimates potential rather than true costs of lost productivity. In other words, the HCM does not account for the replacement of workers due to long-term absence, since adjustments would be difficult to make without job-specific data (eg, the average period of vacancy within a particular profession). Alternative approaches such as the friction cost method or the Washington Panel Approach (WPA) are often used to account for these types of long-term changes in employment [19]. Although this study did not directly integrate productivity and quality-of-life measures (as the WPA does), both the productivity (ie, WPAI) and quality-of-life (SF-12v2) measures provided parallel evidence of the burden associated with caregiving. Finally, this is a descriptive study without a control arm; therefore, no comparison is made with caregivers of stroke survivors without PSS, which is a major limitation of this study. Although this study provides additional evidence of the burden associated with PSS, some studies acknowledge that spasticity contributes to severe disability, including motor and activity limitations, but suggest that stroke with spasticity does not necessarily hold greater clinical relevance than stroke without spasticity [29]. Hence, more research on indirect costs due to PSS is warranted. In summary, the results of this study highlight the substantial burden of PSS caregiving responsibilities.
Easing the burden for these caregivers may have a considerable societal impact from both a humanistic and an economic standpoint. Employed caregivers should be encouraged to consider Family and Medical Leave Act provisions for unpaid leave, and to seek professional help to bolster retirement savings and investments. Also, employers should be encouraged to offer well-designed, flexible work option plans that may improve the work productivity of employed caregivers of stroke survivors with PSS.
Nordic Walking Training Causes a Decrease in Blood Cholesterol in Elderly Women Supplemented with Vitamin D

Objective: Different studies have demonstrated that regular exercise can induce changes in the lipid profile, but results remain inconclusive. Available data suggest that correction of vitamin D deficiency can improve the lipid profile. In this study, we hypothesized that Nordic Walking training would improve the lipid profile in elderly women supplemented with vitamin D. Methods: A total of 109 elderly women (68 ± 5.12 years old) took part in the study. The first group [experimental group (EG): 35 women] underwent 12 weeks of Nordic Walking (NW) training combined with vitamin D supplementation (4,000 IU/day), the second group [supplementation group (SG): 48 women] was only supplemented with vitamin D (4,000 IU/day), and the third group [control group (CG): 31 women] was not subject to any interventions. Blood analysis of total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and 25-OH-D3 was performed at baseline and after the 12 weeks of NW training. Additionally, a battery of field tests specifically developed for older adults was used to assess the components of functional fitness. The same blood analysis was repeated for the EG 6 months after the main experiment. Results: After 12 weeks of NW training and vitamin D supplementation, a decrease in TC, LDL-C, and TG was observed in the EG. In the SG, no changes in the lipid profile were observed, whereas in the CG an increase in the HDL-C level was noticed. Positive physical fitness changes were only observed in the EG. Conclusion: Our data confirmed the baseline assumption that regular exercise, supported by vitamin D supplementation, induces positive alterations in the lipid profile of elderly women.

INTRODUCTION

Regular exercise has been demonstrated to induce several adaptive changes, manifested by an increase in the endurance and strength of skeletal muscle. Positive changes in brain structure and function have also been observed in response to physical activity (1).
These and other adaptive changes induced by exercise are known to lower the risk of cardiovascular disease, diabetes, cancer, depression, and many other conditions (2), often associated with aging. The pro-healthy effect of physical activity on the risk of these diseases may be partially attributed to beneficial changes in insulin sensitivity, inflammatory markers, and blood lipids (3,4). However, the topic continues to raise questions; despite the fact that many studies have reported a positive effect of regular exercise on blood lipids, several studies have shown no effect at all (5). Vitamin D is an endogenous hormone known to regulate the expression of hundreds of genes. Since it is synthesized from 7-dehydrocholesterol, it is also possible that its status is interrelated with cholesterol. There is some evidence that apolipoprotein A-I (apo A-I) gene expression can be modified by vitamin D. At the same time, apo A-I is an essential component of high-density lipoprotein (HDL) molecules, positively influencing its quality. The effects of exercise on blood lipids have been shown to depend on applied dietary solutions or some drug compounds (6). In particular, vitamin D combined with exercise has been demonstrated to modify lipid metabolism and the blood lipid profile by improving insulin sensitivity (7). Physical activity itself is associated with better vitamin D status (25-OH-D3) (8); however, the effect of vitamin D status alone on blood lipids remains unclear. Supplementation of vitamin D at 400 IU for 5 years has been reported to induce no significant changes in blood lipids (9). Conversely, another study has shown that the concentration of 25-OH-D3 correlated inversely with triglycerides (TG) and total cholesterol (TC) (10). Based on the collected data, we hypothesized that the beneficial effects of exercise on the lipid profile may be influenced by vitamin D status. Vitamin D deficiency or insufficiency is prevalent in most countries; it is now considered a pandemic (11,12). Consequently, in this paper, we have studied the effects of vitamin D supplementation alone and combined with 12 weeks of Nordic Walking (NW) training on the lipid profile in elderly women. The present study is the first published report to address the combined effects of exercise and vitamin D on the lipid profile in elderly subjects.

MATERIALS AND METHODS

Three groups of elderly women, all aged over 60 years (68.4 ± 5.0 years old), participated in the study; they were randomly assigned to the groups. The first group [experimental group (EG)] involved 35 women subjected to 12 weeks of NW training supported by vitamin D supplementation (average 4,000 IU/day). The second group [supplementation group (SG)] involved 48 women, subject only to vitamin D supplementation (average 4,000 IU/day). Vitamin D was supplemented three times per week with appropriate doses to reach 28,000 IU/week, which is essentially concordant with current recommendations (13). The third group [control group (CG)] involved 31 women, who did not receive any supplementation and did not participate in the training. Considering the pleiotropic function of vitamin D on many aspects of human health, we recognized that it would be unethical to include a placebo group in the experiment. All subjects underwent a medical check-up prior to the experiment. Exclusion criteria included the following: uncontrolled hypertension (systolic blood pressure over 140 mmHg and diastolic over 100 mmHg), a history of cardiac arrhythmia, cardio-respiratory disorders, and orthopedic problems.
It was recommended that the volunteers did not change their lifestyle and diet habits throughout the study. Experiment activities were completed at the Gdansk University of Physical Education and Sport.

Ethics Statement

The examination was officially approved by the Bioethical Committee of the Regional Medical Society in Gdansk (KB-26/14) according to the Declaration of Helsinki and was registered as a clinical trial (NCT03417700). Before commencing the training and testing, subjects received a verbal description of the experiment. Written informed consent was signed by all participants. Ethics approval was also obtained for referring participants to their family physician upon detection of any abnormal pathology results during the medical check-up.

Blood Analysis

Blood collection was performed following the same timeline for all groups: at baseline and one day directly after the 12-week training program. Additionally, in the EG, blood collection also took place 6 months after the end of the NW training program. Blood samples were obtained between 7 and 8 a.m. after an overnight fast. The serum was separated by centrifugation at 1,000× g for 15 min and stored at −80°C pending analysis. Red blood cell count (10⁶/μL) (RBC), hematocrit (%) (Hct), blood hemoglobin concentration (g/dL) (Hb), low-density lipoprotein cholesterol (LDL-C), HDL-C, TC, and triglycerides (TG) were determined from venous blood samples by conventional methods using a BIOSYSTEMS S.A. A25 analyzer (Costa Brava, Barcelona, Spain).

Vitamin D Assessment

The vitamin D metabolite 25-OH-D3 was measured by high-performance liquid chromatography mass spectrometry (HPLC-MS). The HPLC system was a Transcend TLX turboflow 2 system attached to a TSQ Quantum Ultra triple quadrupole mass spectrometer (Thermo Fisher Scientific, San Jose, CA, USA), as described before (14).

Measurements of Physical Fitness

A battery of field tests developed specifically for older adults was used to assess components of functional fitness in the EG. These tests require very little time or equipment and are designed to be conducted in community settings. In accordance with Rikli and Jones (15,16), we used tests including the chair sit-and-reach (cm) and back scratch (cm) tests at the beginning and end of the program (the tabulated results are reported in Table 1).

Exercise Protocol

The same group of research assistants and instructors supervised all training sessions. The EG completed a 12-week exercise mesocycle, divided into three microcycles. The training procedure was described in detail in our previous study (17). The participants met three times a week, 1 h after eating a light breakfast, and performed the main session of NW training at 60-70% intensity of the maximal HR (10-min warm-up, 45-55-min NW, and 10-min cooldown).
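The 60-70% band above is a simple fraction of maximal heart rate. The sketch below illustrates the arithmetic only; the age-based estimate HRmax = 220 − age is an assumption for illustration, since the study does not state how maximal HR was determined for its participants.

```python
# Target heart-rate zone as 60-70% of maximal HR.
# HRmax = 220 - age is an assumed estimate, not taken from the study.

def hr_zone(age, low=0.60, high=0.70):
    """Return the (low, high) target heart-rate zone in bpm."""
    hr_max = 220 - age
    return round(hr_max * low), round(hr_max * high)

print(hr_zone(68))  # mean participant age -> (91, 106) bpm
```

Note that the mean exercise heart rates reported in the Results below (roughly 116-120 bpm) sit above this age-based band, suggesting the study's maximal-HR values were obtained differently.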
RESULTS

General Outcomes

Baseline descriptive characteristics of the participants are summarized in Table 1. Values of the body mass index (BMI) and the percentage and absolute fat tissue indicate that our groups were within the range of normal to slightly overweight. There were no significant changes in body composition after the 12 weeks of NW training in the EG, SG, or CG.

Level of Physical Fitness

The 12 weeks of NW training improved all measured fitness parameters. Specifically, changes in the level of general shoulder coordination and flexibility were statistically significant (Table 1). The applied training program also improved the level of endurance and lowered the heart rate at rest (from 81 ± 14 to 77 ± 15 bpm), during exercise (average values from 120 ± 17 to 116 ± 16 bpm), and after exercise (from 141 ± 22 to 133 ± 23 bpm). These changes, however, were not statistically significant. The level of physical fitness in the SG and CG was lower compared with the EG; however, the differences did not reach statistical significance (Table 1). In addition, some improvement in endurance was observed in the 2,000-m test (1,082 ± 108 vs. 1,059 ± 116 s); however, it did not reach statistical significance (p = 0.2, CI −49 to 11).

General Characteristics of Blood Tests

Hematological parameters of the EG were within reference ranges in all subjects at baseline as well as after the training. Nonetheless, a significant drop in Hb, mean corpuscular hemoglobin concentration (MCHC), and mean corpuscular hemoglobin (MCH) was observed after training (Table 2). Importantly, no iron deficiency (not shown) or anemia was observed in any subject.

Lipid Profile

Vitamin D supplementation combined with the 12 weeks of NW training induced significant changes in the lipid profile in the EG: a significant decrease in TC, LDL-C, and TG was noted. All these changes were accompanied by a significant rise in 25-OH-D3 concentration owing to the applied supplementation. The training and supplementation caused a decrease in HDL-C; however, the shift was not statistically significant (Table 3). A detailed analysis of the ratios between parameters of the lipid profile (LDL-C/HDL-C, TC/HDL-C, TC/LDL-C, TG/HDL-C) showed no additional tendency for change, neither within individual groups nor in the intergroup comparison. Interestingly, 6 months following the experiment, the levels of TC, HDL-C, and LDL-C returned to baseline in the EG (Table 3). In the CG, the concentration of 25-OH-D3 remained stable over the 12-week period, considerably lower compared with the EG and SG. An increase in HDL concentration was the only change observed in the CG (Table 4). At the same time, in the SG, an increase in 25-OH-D3 was accompanied by a decrease in HDL-C relative to the CG (Table 4).

DISCUSSION

Data obtained through this study suggest that regular exercise induced positive changes in the lipid profile in elderly women. We demonstrate that NW training combined with vitamin D supplementation led to a significant decrease in total blood cholesterol in elderly women. As reviewed by Leon and Schantz, many studies have demonstrated that exercise applied alone induced a decrease in LDL and TG, but had no effect on blood TC (18). Conversely, a 12-week program of aerobic exercise did not influence blood lipids, despite triggering a decrease in TG and a transient increase in HDL (19). It has also been revealed that lipid-profile concentrations (serum TG, TC, HDL-C, and LDL-C) did not differ between athletes and non-athletes (20). In our previous study, we have shown that 4 weeks of regular training in young rowers did not influence the lipid profile unless the subjects were supplemented with vitamin D (21). All of these data indicate that the effects of exercise on blood lipids can be modulated by other factors, vitamin D among them. Our preliminary data on 25-OH-D3 demonstrate that most of the women participating in the study exhibited vitamin D deficiency or insufficiency. Thus, in our present study, we have investigated the effect of regular training in combination with vitamin D supplementation.
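The pre/post lipid comparisons above are simple paired contrasts. As a minimal illustration, and not the authors' analysis code, the sketch below runs a paired t-test on hypothetical total-cholesterol values; the excerpt does not state which statistical test the study actually used.

```python
# Paired pre/post comparison of total cholesterol (TC) in a training
# group. Data are simulated stand-ins, not the study's measurements,
# and the paired t-test is an assumed choice of test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
tc_before = rng.normal(220, 25, 35)            # mg/dL, 35 EG participants
tc_after = tc_before - rng.normal(12, 10, 35)  # simulated post-training drop

t_stat, p_value = stats.ttest_rel(tc_before, tc_after)
print(f"mean change = {np.mean(tc_after - tc_before):+.1f} mg/dL, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```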
The present study also demonstrates that NW training combined with vitamin D supplementation induced not only changes in total blood cholesterol, but also a significant decrease in LDL-C, yet no shift in HDL-C. Still, a comparison between the SG and the CG shows that vitamin D has a tendency to lower HDL-C. A previously published study supports this observation, as a low dose of vitamin D supplementation (300 IU/day) was shown to lead to a significant drop in HDL-C in postmenopausal women (22). In addition, no differences in the lipid profile between physically active (>3 h exercise/week) and physically inactive (<3 h exercise/week) women were observed. The authors concluded that vitamin D supplementation may have an unfavorable effect on lipids in postmenopausal women undergoing hormone replacement therapy (22). It is difficult to agree with this conclusion, as studies published in recent years have demonstrated that it is the quality rather than the quantity of HDL that plays an important role in human health (23-25). The level of HDL in people who converted to a low-fat, high-carbohydrate diet was observed to decrease, while the atheroprotective potential improved (26). Another study has shown the level of HDL-C to decline in bariatric patients after surgical intervention, reaching the preoperative level after 6 months; it has been demonstrated that during this period a qualitative switch took place, as apoE HDL was replaced by apoA-I HDL (25). The effect of vitamin D on apoA-I gene expression is debatable, because both positive and negative shifts have been reported in response to 1,25-OH-D3 treatment. These data indicate that the effects of vitamin D on blood lipids can be modulated by other factors. At the same time, in our previously published study we demonstrated that regular exercise had no effect on the HDL-C level in vitamin-D-deficient young men, but led to its significant reduction when accompanied by vitamin D supplementation (21). It is generally believed that changes in blood lipids result from the adaptive response of skeletal muscle, manifested in an increase in lipid oxidation and insulin sensitivity (27). We observed some improvement in endurance in the EG, which could have been accompanied by an increase in mitochondrial oxidative potential. Contrary to our expectation, changes in endurance capacity were accompanied by a significant decrease in Hb and MCH; however, all the recorded data were within the reference range. More research is needed to understand the nature of these changes. Our data suggest that the shifts in blood lipids induced by NW training were mediated by vitamin D and some factors possibly discharged from exercising muscles (4). Interestingly, in the EG, in the 6-month period after the intervention, during which neither regular training nor supplementation was taking place, all lipid parameters and 25-OH-D3 returned to baseline values. This happened despite the fact that all participants had attended a talk about the benefits of vitamin D and had received recommendations about vitamin D supplementation and exercise. Certainly, however, it is too early to judge the nature of these changes without data about the subpopulations of HDL particles. The NW training and vitamin D supplementation also had a positive effect on blood TG. These data are in agreement with previously published studies, in which plasma 25-OH-D was inversely associated with TG and TC (10,28). Certainly, the main limitation of this study is the lack of a placebo group.
As mentioned above, we decided not to include a placebo group in our study for two main reasons. Firstly, most women taking part in our study were vitamin D deficient at the beginning of the experiment. Given that vitamin D deficiency can increase the risk of many morbidities, we decided that maintaining this state would be unethical. Secondly, observations made in this study are supported by our earlier research on young athletes (21). We can thus conclude that regular NW exercise induced positive changes in blood lipids in elderly women in whom vitamin D deficiency was corrected.

ETHICS STATEMENT

The examination was officially approved by the Bioethical Committee of the Regional Medical Society in Gdansk (KB-26/14) according to the Declaration of Helsinki.

AUTHOR CONTRIBUTIONS

KrP and JA designed the study and performed the research. JK, JA and EZ performed the research and wrote the paper. KaP, JM, JJ, WS, and ML performed the research.
The Contribution of Students' Motivation and Sentence Structure Toward Writing Skill

This research aimed to find out the correlation between: (1) students' motivation and writing skill; (2) sentence structure and writing skill; and (3) students' motivation and sentence structure simultaneously and writing skill. This study used a correlational method with a cluster random sample of 39 students of the eleventh grade of SMK N in Surakarta. The researcher used a questionnaire, an objective test, and an essay test as the instruments to collect the data. Single and multiple linear regression and correlation were used to analyze the data. The research findings show that: (1) there is a positive correlation between students' motivation and writing skill, in which students' motivation brings a 17.63% effective contribution to writing skill; (2) there is a positive correlation between sentence structure and writing skill, in which sentence structure brings a 57.05% effective contribution to writing skill; and (3) there is a positive correlation between students' motivation and sentence structure simultaneously and writing skill, in which students' motivation and sentence structure simultaneously bring a 74.68% contribution to writing skill.

INTRODUCTION

Writing has become a basic activity for students in vocational school. They have to write papers, articles, book reports, and other kinds of writing. Writing skill still has to be mastered by students even after they have graduated from school, because writing skill is needed in their environment. They have to write a Curriculum Vitae (CV) and an application letter to apply for a job. Moreover, if they work somewhere, writing skill is also needed to write reports, articles, or any other practical tasks. According to Harris (1993:10), "writing is a process that occurs over a period of time, particularly if we take into account the sometimes extended periods of thinking that precede creating an initial draft". Petty and Jansen (1980:362) say that writing is the mental and physical act of forming letters and words; it means that writing is a process of expressing feeling or thinking by arranging words into a sentence, sentences into a paragraph, and paragraphs into a conventional written form. Boice (in Murray and Moore, 2006:7) states that writing is not just influenced by what we know and what we have discovered about a particular phenomenon; it is also influenced by what we feel, and more particularly, what we feel about ourselves. Writing is influenced by many factors, including what we discover, feel, and think about our environment or ourselves. Based on the explanation above, it can be concluded that writing is the process of forming meaningful sequences of symbols into a written form that carries a certain language meaning. Students' writing skill can be influenced by many factors, such as the teacher, the students, the curriculum, teaching methods, class conditions, and many others. Those factors can be divided into two categories: linguistic and non-linguistic factors. Non-linguistic factors include motivation, attitude, class condition, teaching method, and curriculum. Linguistic factors include vocabulary, sentence structure, and grammar. Motivation is one of the non-linguistic factors that contribute to students' learning of writing. According to Brophy (1998:3), motivation is a theoretical construct used to explain the initiation, direction, intensity, and persistence of behavior, especially goal-directed behavior.
Motivation in a classroom context is used to explain the degree to which students invest their attention and effort in the teaching and learning process. Elliot, Kratochwill, Cook and Travers (2000:332) state, "Motivation is defined as an internal state that arouses us to action, pushes us in particular directions, and keeps us engaged in certain activities". In addition, Sinclair (in Towndrow, Koh, and Soon, 2008:37) states that motivation is what moves us to do something that involves energy and drive to learn, work effectively, and achieve potential. Motivation is a state of cognitive arousal which provokes a decision to act and can make people achieve their goal (Wiliam and Burden, in Harmer, 2007:98). It means that the strength of motivation will depend on how much value the individual places on the outcomes of the wishes they want to achieve. According to Brown (2007:170), there are two pairs of motivation types, seen from different points of view: instrumental and integrative motivation, and intrinsic and extrinsic motivation. Brown (1994:153) states that instrumental motivation refers to motivation to acquire a language as a means for attaining an instrumental goal. Besides instrumental motivation, there is another type of motivation, namely integrative motivation. Brown (1994:154) says integrative motivation describes learners who wish to integrate themselves into the culture of the second language group and become a part of the society. From those theories, it can be seen that integrative motivation focuses more on the language culture of the society, whereas instrumental motivation focuses on the purpose of learning the language. From those explanations, the writer concludes that motivation is an arousal, impulse, emotion, or desire that drives people to move into particular action in order to achieve their goal. In teaching and learning activity, the concept of students' motivation is used to explain the degree to which students have high or low motivation. According to Sun (2010:1), the more motivation they may have, the more effort they tend to put into learning the language. This shows that highly motivated students usually have high spirit to reach their goal and learn and practice more than students with low motivation. The teacher must make students highly motivated to learn by telling funny and motivating stories, varying teaching and learning activities, or putting games into every process of teaching and learning in class. The linguistic factor that can contribute to students' writing skill is sentence structure. The understanding of sentence structure will lead students to make effective sentences in expressing ideas and feelings. Widiarso (2006:15) says that the absence of sentence structure understanding will lead students to make grammatical mistakes in writing a text. There are some experts who define the term sentence structure. According to Oshima & Hogue (2006:164), a sentence is a group of words used to communicate ideas. A sentence is the largest unit of grammatical organization within which parts of speech (e.g., nouns, verbs, and adverbs) and grammatical classes (e.g., word, phrase, and clause) are said to function (Richards & Schmidt, 2010:522). In addition, Ur (1996:79) says that a sentence is a set of words standing on their own as a sense unit, its conclusion marked by a full stop, question mark, or exclamation mark.
It can be concluded that a sentence is a group of words which stands as a sense unit, involving parts of speech and grammatical classes, to communicate ideas. People may get confused by the terms structure and grammar. According to Richards & Schmidt (2010:251), grammar is a description of the structure of a language and the way in which linguistic units such as words and phrases are combined to produce sentences in the language. But Ur (1996:79) says that a specific instance of grammar is usually called a structure. In this study, grammar refers to sentence structures that are limited to constructing and organizing sentences. Then, McCrimson (1963:402) says that a sentence can be studied in terms of two kinds of units: form units, which show inflectional changes in words, and function units, which show how words are related in a sentence, chiefly through word order. Based on the statements above, it can be concluded that the term sentence structure refers to the linguistic rules used to construct correctly formed sentences. With the correct form of sentences, we can express our ideas clearly. From those explanations, the aims of the research are to know whether: (1) there is a contribution of students' motivation to the writing skill of the eleventh grade students of SMK N in Surakarta; (2) there is a contribution of sentence structure to the writing skill of the eleventh grade students of SMK N in Surakarta in the academic year 2013/2014; and (3) there is a contribution of students' motivation and sentence structure simultaneously to the writing skill of the eleventh grade students of SMK N in Surakarta.

RESEARCH METHODS

In conducting the research, the writer used the quantitative correlational method. According to Nunan (1992:3), quantitative research is obtrusive and controlled, objective, generalisable, outcome oriented, and assumes the existence of facts which are somehow external to and independent of the observer or researcher. In addition, Singh (2007:16) says that correlation is one of the most widely used measures of association between two or more variables. This research has two kinds of variables: predictor (independent) variables and a response (dependent) variable. The predictor or independent variables in this research are students' motivation (X1) and sentence structure (X2). The response or dependent variable is writing skill (Y). The research had 39 student respondents, who were selected randomly from the eleventh grade of SMK N in Surakarta in the academic year 2013-2014. The tests were administered at SMK N in Surakarta over three days. The researcher used a questionnaire to collect the students' motivation data and tests to measure sentence structure and writing skill. The tests were an objective test in the form of a multiple-choice test for sentence structure and an essay test for writing skill. Before doing the tests, the researcher explained the test instructions and the purpose of this research. On the first day of data collection, the researcher gave the students the questionnaire about students' motivation; they took 45 minutes to complete it. On the second day, students were asked to do the sentence structure test for 60 minutes. On the third day, the researcher gave the writing test for 60 minutes.
RESEARCH FINDINGS AND DISCUSSIONS

In this part of the research, the correlation between students' motivation and writing skill, the correlation between sentence structure and writing skill, and the correlation between students' motivation and sentence structure simultaneously and writing skill are presented. The results showed that the participants' scores for students' motivation, sentence structure, and writing skill are normally distributed, so the tests of the three variables in this research were valid and there was no deviation, even in a small sample. The result of the regression is linear and significant, so the rise and fall of writing skill are followed linearly by the rise and fall of students' motivation and sentence structure. The first finding came from the correlation between students' motivation and writing skill. Based on the computation of the linearity and significance of the regression of students' motivation (X1) and writing skill (Y), the result is linear and significant. The result shows that the linearity statistic for students' motivation (X1) and writing skill (Y) is F_o = 1.7736; to know whether or not students' motivation (X1) and writing skill (Y) are in linear regression, F_o must be compared to the tabled value F_t. When examining the simple correlations associated with the regression, the researcher noted that students' motivation was related to writing skill (r_x1y = 0.3303). This value is compared to the tabled r at significance level α = 0.05 for N = 39, which is r_table = 0.316. It can be seen that r_x1y is greater than r_table; it means that Ho is rejected and there is a positive correlation between students' motivation (X1) and writing skill (Y). The coefficient of determination (r²) between students' motivation (X1) and writing skill (Y) is reported as 0.3303, meaning that 33.03% of the variation in writing skill is contributed by students' motivation (X1) and 66.97% is contributed by other factors. The second finding came from the correlation between sentence structure and writing skill. Based on the computation of the linearity and significance of the regression, the linearity statistic for sentence structure (X2) and writing skill (Y) is F_o = 1.9936; to know whether or not sentence structure (X2) and writing skill (Y) are in linear regression, F_o must be compared to the tabled value F_t. The computation showed that the significance test of sentence structure (X2) and writing skill (Y) at α = 0.05 for df 1 and 37 gives F_o = 73.973, which must be compared to the tabled F; the table shows that F for α = 0.05, df 1 and 37 is F_t = 4.08. F_o is greater than F_t, or F_o (73.973) > F_t (4.08), which means that the regression is significant. The result of the correlation computation using the Pearson product-moment formula for sentence structure (X2) and writing skill (Y) shows a coefficient of correlation of r_x2y = 0.816. This value is compared to the tabled r at significance level α = 0.05 for N = 39, which is r_table = 0.316. It can be seen that r_x2y is greater than r_table; it means that Ho is rejected and there is a positive correlation between sentence structure (X2) and writing skill (Y). The coefficient of determination (r²) between sentence structure (X2) and writing skill (Y) is 0.6666, meaning that 66.66% of the variation in writing skill is contributed by sentence structure (X2) and 33.34% is contributed by other factors.
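The r-table checks described above can be reproduced with standard tools. The sketch below is not the authors' code: it computes a Pearson correlation on made-up motivation and writing scores and compares it with the critical value cited in the text (r = 0.316 at α = 0.05 for N = 39).

```python
# Pearson product-moment correlation with a critical-value check.
# The score arrays are hypothetical stand-ins for the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
motivation = rng.normal(70, 10, size=39)           # X1: questionnaire scores
writing = 0.3 * motivation + rng.normal(0, 9, 39)  # Y: essay-test scores

r, p_value = stats.pearsonr(motivation, writing)
r_table = 0.316  # critical r at alpha = 0.05 for N = 39, as cited above

print(f"r = {r:.4f}, p = {p_value:.4f}")
print("significant at 0.05" if abs(r) > r_table else "not significant")
print(f"r^2 = {r**2:.4f}  (share of variance explained)")
```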
The third finding came from the correlation between students' motivation and sentence structure simultaneously and writing skill. The technique used here is multiple linear regression. From the computation of the multiple regression, it was found that the coefficients a1, a2, and a0 are 0.726, 0.674, and 45.383. Therefore, the multiple linear regression equation of writing skill (Y) on students' motivation (X1) and sentence structure (X2) becomes: Y = 45.38 + 0.726 X1 + 0.674 X2. The significance test of the correlation coefficient gives F_o = 53.105. This value is compared to the tabled F at the 5% significance level with degrees of freedom (df) 2:36, which is 3.23. It is obvious that F_o is greater than F_table (53.105 > 3.23), so F_o is significant and the regression equation is also significant. From the multiple linear regression analysis of students' motivation (X1) and sentence structure (X2) with writing skill (Y), the writer finds that the coefficient of multiple correlation (R_y12) is 0.86. This value is then tested using a significance test, which results in F_o = 53.105, greater than the tabled F of 3.23 with degrees of freedom (df) 2:36 at the significance level α = 0.05. It means that R is significant, that Ho is rejected, and that Ha is accepted. So, it can be concluded that there is a positive correlation between students' motivation (X1) and sentence structure (X2) simultaneously and writing skill (Y). The computation of the regression is aimed at predicting the correlation between the variables and identifying which of the two variables has more contribution than the other. Based on the computation, the contribution of students' motivation (X1) is 23.61% as the relative contribution and 17.63% as the effective contribution, while the contribution of sentence structure (X2) is 76.39% as the relative contribution and 57.05% as the effective contribution (a minimal computational sketch of this regression appears below).

DISCUSSION

For the first aim, the researcher was interested in determining whether there is a correlation between students' motivation and writing skill. Based on the findings, students' motivation gives a contribution to students' writing skill. The result confirmed that there is a positive, significant correlation (t_o = 4.27194 > t_t = 1.70) between students' motivation and writing skill. In teaching and learning activity, the concept of students' motivation is used to explain the degree to which students have high or low motivation. Students who have high motivation are willing to learn more than others. This is in line with Sun (2010:1), who says the more motivation they may have, the more effort they tend to put into learning the language. This gives students' motivation an important role in developing students' ability in learning writing. Then, Ghavania et al (2011:1116) say that the learning motivation aspect can motivate students to get involved and enhance learning effectiveness. It shows that motivation is included as one factor that contributes to students' writing skill. Motivation can increase students' ability in learning writing because motivation gives students the desire and enthusiasm to learn about anything. Based on the computation, students' motivation contributes about 33.03% to writing skill.
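As flagged above, here is a minimal sketch of the multiple-regression step. The input data are hypothetical, so the fitted coefficients only approximate the generating values and will not reproduce the study's figures.

```python
# Multiple linear regression of Y on X1 and X2 via least squares,
# mirroring the form Y = a0 + a1*X1 + a2*X2 reported above.
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal(70, 10, 39)  # students' motivation (hypothetical)
X2 = rng.normal(25, 5, 39)   # sentence-structure test (hypothetical)
Y = 45.38 + 0.726 * X1 + 0.674 * X2 + rng.normal(0, 5, 39)

A = np.column_stack([np.ones_like(X1), X1, X2])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
a0, a1, a2 = coef
print(f"Y = {a0:.2f} + {a1:.3f}*X1 + {a2:.3f}*X2")

# Multiple correlation R: correlation between fitted and observed Y.
R = np.corrcoef(A @ coef, Y)[0, 1]
print(f"R = {R:.3f}, R^2 = {R**2:.3f}")
```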
The second aim was finding out the correlation between sentence structure and writing skill. Sentence structure is one of the linguistic factors that can contribute to students' writing skill. The understanding of sentence structure will lead students to make effective sentences in expressing ideas and feelings. This makes sentence structure an important component of writing: if students can make a good arrangement of sentence structure, they will produce good writing too. This is in line with Widiarso (2006:15), who says that the absence of sentence structure understanding will lead students to make grammatical mistakes in writing a text. Based on the computation, 66.66% of the variance of writing skill was determined by sentence structure. The last result showed that there is a positive correlation between students' motivation and sentence structure simultaneously and writing skill. According to Boice (in Murray and Moore, 2006:7), writing is not just influenced by what we know and what we have discovered about a particular phenomenon; it is also influenced by what we feel, and more particularly, what we feel about ourselves. Then, Nunan (1989:36) says that the aspects of writing are content, format, sentence structure, vocabulary, punctuation, spelling, and letter formation. Mastering sentence structure will help students develop the awareness to construct good writing. Students' awareness can make a big impact on the process of creating good writing: they will pay more attention to the details of writing, from word structure to sentence structure, through to the coherence and mechanics of the paragraphs of their writing. The impulse of students' motivation makes them willing to learn and explore more specifically about writing. If they are motivated, they will ask their teacher anything about writing; they will be curious about how to construct good writing and how they can learn to do it. Based on the data gathered, it can be stated that both students' motivation and sentence structure appeared to be significant predictors of their writing skill.

CONCLUSION AND SUGGESTION

The data analysis shows that there is a positive correlation between students' motivation and writing skill, there is a positive correlation between sentence structure and writing skill, and there is a positive correlation between students' motivation and sentence structure simultaneously and the writing skill of the eleventh grade students of one of the SMK N in Surakarta. Judging by quantity, sentence structure brings a greater contribution to writing skill (66.66%) than students' motivation (33.03%). Students' motivation and sentence structure simultaneously bring the highest contribution to writing skill (74.68%). Therefore, the two variables cannot be ignored in the effort to improve students' writing skill. It is suggested that, in order to increase students' writing ability, teachers should motivate them to learn. The teacher can tell motivating stories to make students highly motivated to learn writing. Moreover, to make students aware of the writing mistakes they usually make, the teacher can give easy tasks containing many grammatical errors and ask students to identify them. For students, they must have high motivation to learn, because with high motivation they will encourage themselves to learn anything they want. Moreover, students must be aware of the grammatical or structural errors they make in the process of writing.
With this awareness, they can pay more attention to their mistakes and errors, so they can correct them easily and produce good writing.
Anaplastic lymphoma associated with breast implants—Early diagnosis and treatment

Abstract

Anaplastic large cell lymphoma associated with breast implants is a relatively new disease that deserves attention from the academic community. Brazil figures as one of the protagonists in plastic surgery; however, publications are insufficient and very few cases are reported in comparison to other countries. It is a disease with an excellent prognosis when diagnosed early and treated effectively, but for this to happen, it is essential that health care professionals and the patient understand its pathology. We report two cases identified in a small town during a short period of time. In both cases, the patients presented late seroma, associated with pain as the clinical presentation, at 13 and 9 years after the placement of silicone implants with textured polyurethane surfaces. After the procedure, the patients were screened for cancer. Further research with more robust samples is still needed to fully determine the risks and benefits of using textured versus smooth implants.

KEYWORDS: breast implants; lymphoma, large-cell, anaplastic; lymphoma, non-Hodgkin; neoplasm; seroma

INTRODUCTION

Breast augmentation is the most commonly performed cosmetic surgery in the world, with around 1.9 million procedures each year. 1 It is estimated that 35 million women worldwide have already had this procedure. 2 Studies have shown an increased risk of anaplastic lymphoma in patients with breast implants. 3,4 Anaplastic large cell lymphoma associated with breast implantation (BIA-ALCL) is a rare type of non-Hodgkin's lymphoma (NHL). Primary NHLs account for less than 1% of all breast malignancies, and most of them are of B-cell origin, with diffuse large B-cell lymphoma being the most common type. Less than 10% of breast NHLs are of T-cell lineage. Although anaplastic large cell lymphoma (ALCL), a rare T-cell lymphoma, accounts for only 3% of adult NHLs and 6% of breast NHLs, it appears to have a tropism for breast tissue compared with other T-cell lymphomas. 5,6 At present, approximately 600 cases of BIA-ALCL have been reported worldwide, but there is no consensus on the true incidence rate of BIA-ALCL, since it varies widely from country to country. 7 According to individual studies and national statistics, the reported counts currently stand at 7 cases in Germany, 19 in France, 22 in Italy, 41 in the United Kingdom, 43 in Holland, 72 in Australia, and 149 in the USA. 3 Furthermore, a review study carried out with 40 government databases showed that there were 363 reported cases of BIA-ALCL worldwide. 8 Brazil is one of the world's leaders in the number of plastic surgeries performed annually, second only to the United States.
Augmentation mammoplasty is the most performed cosmetic surgery in Brazil, with around 275 thousand procedures per year. 1 In the BIA-ALCL Global Report of adverse events associated with breast implants, there were no cases of BIA-ALCL in Brazil, as reported by the National Health Surveillance Agency of Brazil (ANVISA). 8 The lack of reported cases, and thus of incidence data and other relevant information that could be drawn from such cases, makes it difficult to understand the pathology, the treatment, and the real risk of women developing BIA-ALCL after implant surgery. Considering the scarcity of cases reported in Brazil, in contrast to its outstanding position in augmentation mammoplasty, the objective of this study was to report two cases of BIA-ALCL in women with breast implants. They were identified in a city in Southern Brazil. Furthermore, we decided to emphasize the fact that new implants were placed in both cases, because there are no studies or consensus on breast implant revision surgery in this setting. The cases were reported based on the 2013 CARE checklist (case report guidelines). The Ethical Committee of the State University of Maringa, number 3.999.625, approved this research on April 30, 2020. All terms of consent from the patients, professionals, and institutions involved were provided.

Case history (Case 1)

A 42-year-old Caucasian female presented with pain and increased volume of the right breast. She had undergone bilateral augmentation mammoplasty with silicone implants 13 years before. After replacement of the implants and total capsulectomy, she was diagnosed with BIA-ALCL and has been undergoing oncological follow-up since then. In 2005, the patient had undergone breast augmentation for cosmetic purposes with SILIMED textured polyurethane-surface implants (265 mL, anatomical model), in the subglandular plane through a periareolar incision. After 13 years, she presented a progressively painful increase in the volume of the right breast, without any previous trauma, which persisted for 1 month (Figure 1). Diagnosed with a seroma, she underwent fluid puncture and examinations; however, the seroma recurred, and surgery was indicated to replace the implants and perform a capsulectomy. In regard to family history, the patient's mother had had peritoneal cancer; the patient has one 15-year-old son. The patient was examined seated, with arms hanging by the side of the body. There was an increase in the volume of the right breast; nonetheless, there were no changes in skin color or the nipple-areola complex, nor were there any skin lesions. Breast asymmetry was visible in terms of volume, and there was a more pronounced ptosis of the right breast. Palpation of the breasts with the patient seated and lying in the supine position did not detect palpable lymph nodes or nodules. An increase in breast volume was observed mainly in the lateral quadrants of the right breast, with generalized pain during breast palpation. The initial investigation with an ultrasound of the breasts showed, in the right breast, evidence of surgical manipulation, a breast implant with signs of rupture, and the presence of free liquid extending from the 09:00 to the 06:00 o'clock positions, which might have represented a rupture of the implant. No changes were detected in the left breast or its implant. One year before this examination, the patient had undergone a routine breast ultrasound without any changes.
It was then decided to further the investigation with an MRI scan, which showed no evidence of implant rupture but detected periprosthetic fluid on the right. Considering the hypothesis of seroma, it was decided to collect a sample of the liquid with a fine cannula guided by ultrasound. About 90 mL of citrus-colored liquid was drained and sent for laboratory tests and pathological anatomy. Blood tests did not show any significant changes. The liquid was sent for qualitative analysis and presented: pH 7.0, slightly cloudy appearance and light yellow color, a protein level of 6.0 g/dL, a glucose level of 37 mg/dL, a red blood cell count of 85, a leukocyte count of 16 (neutrophils 2%, lymphocytes 98%), and Gram bacterioscopy that did not detect any bacteria. The liquid was also sent for pathological anatomy and was analyzed through three slides with hematoxylin/eosin staining. Microscopy showed cytological smears composed of macrophages, lymphocytes, and sparse neutrophils over a background of fluid plasma and red blood cells. No signs of malignancy were identified in the sample, favoring the conclusion that a chronic inflammatory process was occurring. Given the recurrence of the seroma after 1 month, it was decided to exchange the breast implants, move them to the submuscular plane, and perform a bilateral total capsulectomy. After the procedure, the right breast capsule was sent for pathology, along with the periprosthetic fluid for analysis. In this second sample, cytology of the liquid on two slides with hematoxylin/eosin staining identified atypical cells (several isolated and pleomorphic cells, with large horseshoe-shaped or reniform nuclei amid clear or eosinophilic cytoplasm), which could be classified as neoplastic (Figure 2A). The capsule from the right breast was studied with eight blocks containing multiple fragments, which showed infiltration, in clusters or in a diffuse form, of pleomorphic cells, some with an eccentric bean-shaped nucleus and clear cytoplasm, and others also with a horseshoe-shaped nucleus (Figure 2B). In the internal region of the capsule, there was associated fibrinoid necrosis (Figure 2C). The diagnosis was atypical lymphoid infiltrate with pleomorphic/anaplastic cells in the capsule of the right breast implant, with the lymphoid infiltrate compromising the entire thickness of the capsule without invasion of the adjacent breast parenchyma. Given the injury-free circumferential surgical margin, an immunohistochemical study was suggested, which revealed positive CD30 immunoexpression and T-cell markers such as CD3 and CD5, epithelial membrane antigen positivity, and negativity for anaplastic lymphoma kinase (ALK), confirming immunohistochemical aspects compatible with anaplastic cell lymphoma. The initial approach had been to puncture the fluid, guided by ultrasound, and to investigate it, which did not initially reveal neoplastic cells. However, with the recurrence of fluid accumulation, and in the interest of improving the aesthetic aspect of the breasts, it was decided to replace the implants with MENTOR textured 400 mL round-shape high-profile implants in the partial submuscular plane after bilateral total capsulectomy (Figure 1B). The diagnosis of BIA-ALCL was made based on the examination of the capsule, together with the identification of the absence of invasion of the adjacent breast parenchyma and surgical margins free of neoplasm, which directed us towards the treatment of the disease.
The patient was referred to a clinical oncology team, which expanded the investigation with positron emission tomography (PET/CT) with fluorodeoxyglucose (FDG-18F) that did not show abnormal areas of increased glycolytic metabolism. Tomography of the chest and abdomen revealed areas of retraction in the right renal cortex, probably the consequence of chronic inflammation.

Outcome and follow-up (Case 1)

After the procedure, the patient underwent oncological follow-up every 6 months, reporting chronic pain in the lateral region of the left breast. She was satisfied with the aesthetic result and showed no signs of recurrence after 1 year of treatment (Figure 3).

Case history (Case 2)

A 30-year-old Caucasian female patient presented with burning pain and a volume increase of the left breast. She had undergone bilateral augmentation mammoplasty with silicone implants 9 years earlier. She was diagnosed with BIA-ALCL through the analysis of the punctured seroma in the left breast. The implants were removed and a bilateral total capsulectomy was performed. In 2010, the patient had undergone breast augmentation for aesthetic purposes with the introduction of SILIMED textured implants (polyurethane surface, 305 mL, anatomical shape), in the subglandular plane through an inframammary incision. After 9 years, she presented with a painful volume increase in both breasts, more pronounced on the left. Two days after the onset of symptoms, 140 mL of citrus-colored liquid from the left breast was punctured and sent for analysis and immunohistochemistry; BIA-ALCL was diagnosed. We opted for removal of the implants and a bilateral total capsulectomy. The patient had no family history of cancer and denied any comorbidities or continued use of medications. The breasts were inspected with the patient seated and her arms hanging alongside her body; no changes in skin color or the nipple-areola complex and no skin lesions were observed. A greater volume was observed in the left breast. Palpation of the breasts was performed with the patient seated and lying in the supine position, and there were no palpable lymph nodes or nodules.

Differential diagnosis, investigations, and treatment (Case 2)

Ultrasonography of the breasts was performed and showed a liquid collection around the circumference of the implants, larger in the left breast, without solid or cystic nodular formations, and with free axillary extensions. After puncture of the left breast fluid, an MRI of both breasts was requested, which showed intact implants without signs of rupture, the presence of fluid around the implants (most significant on the right), and tiny bilateral cysts. Blood tests showed no changes. The cytology of the left breast aspirate showed findings suggestive of the inflammatory content of a seroma and was negative for neoplastic cells, as was the culture of the material. The fluid was also sent for immunohistochemistry, where the presence of neoplastic cells was identified, demonstrating positive CD30 expression and negativity for ALK, findings indicative of BIA-ALCL. The implant capsules were sent for anatomopathological and immunohistochemistry tests, with no signs of malignancy in the samples and negativity for CD30, indicating pseudosynovial metaplasia and an inflammatory response. After the preoperative diagnosis of BIA-ALCL, it was decided to remove the silicone breast implants and perform a total capsulectomy (Figure 4) without placing new implants.
The patient was referred to a clinical oncology team, which expanded the investigation using PET/CT with FDG-18F; no abnormal areas of increased glycolytic metabolism were shown. Tomography of the chest and abdomen showed only a hepatic hemangioma.

Outcome and follow-up (Case 2)

After the procedure, the patient was screened for cancer and reported bilateral hypersensitivity in the breasts. One year after the removal of the implants and capsule, the patient insistently opted for the placement of new implants, and the surgery was performed uneventfully, followed by discharge from the clinical oncology team.

DISCUSSION

The lack of well-reported cases of BIA-ALCL in Brazil makes it difficult to establish the precise incidence of this pathology. The availability of these data, particularly in terms of the number of cases and their specific details, could influence the form of treatment and also offer greater knowledge of the real risks that women may face after undergoing implant surgery. Despite the scarcity of reported cases in Brazil, we found two cases in a city in Southern Brazil. Based on a search of the available literature, the oldest reported case of BIA-ALCL dates from 1997. 9 Anaplastic large cell lymphoma associated with breast implants began to be recognized as a distinct disease by the World Health Organization in 2016. 10 The US Food and Drug Administration (FDA) reported 573 cases worldwide, with 33 deaths, in 2019. 11,12 Although Brazil is the country with the second largest number of breast implants in the world, there has been only one published case of BIA-ALCL, from 2017, which presented clinically as a tumor mass, differing from the typical presentation of seroma. 13 In both cases reported in this study, the patients presented late seroma associated with pain at 13 and 9 years after the placement of silicone implants with textured polyurethane surfaces. The most common presentation of BIA-ALCL is indeed a large collection of spontaneous periprosthetic fluid, which can occur as early as 1 year after surgery, but mostly from 7 to 10 years on average after the placement of textured-surface implants. 14 Other symptoms described include skin rash, 15 capsular contracture, 16 and lymphadenopathy. 17 It is necessary to consider the possibility of BIA-ALCL in a patient with persistent late-onset peri-implant seroma (>1 year after implantation) or with a mass or masses adjacent to the breast implant. 14 The pathogenesis of the disease is still uncertain. Theories relate textured implants to the mammary microbiome, considering that the texture corresponds to a larger surface area, enabling increased bacterial adhesion and biofilm formation, thereby causing greater local inflammatory activity and the potential for malignant transformation 18 associated with genetic predisposition. 19 Silicone has been shown to be immunogenic and to incite a chronic inflammatory response. Saline implants are also often surrounded by an impermeable silicone elastomeric capsule that might be immunogenic by itself. Because chronic inflammation has been associated with the development of lymphomas, as in Helicobacter pylori infection in gastric extranodal marginal zone lymphoma, it is possible that chronic inflammatory stimulation is related to the development of BIA-ALCL. 20,21 This is in keeping with the fact that BIA-ALCL originates from activated mature cytotoxic T cells.
22 Pathological analysis is essential for the diagnosis, combining cytological analysis of the seroma fluid with histopathology of the implant capsules. Microscopically, the tumor cells are present in the seroma or in the fibrous capsule of the implant. The cells in the effusion fluid are typically identified along the inner surface of the fibrous capsule, either as individual cells, as cell clusters, or occasionally as cohesive sheets. Immunophenotyping is likewise essential for the diagnosis, with the anaplastic cells characteristically showing strong and uniform membranous expression of CD30. The tumor cells variably express T cell antigens including CD3 (30%-46%), CD45 (36%) and CD2 (30%), but have low or no expression of CD5, CD7, CD8, and CD15. 23,24 To date, there have been no confirmed cases of BIA-ALCL in patients with an exclusive history of smooth implant use. In 2018, the FDA recognized 30 cases that occurred in patients with smooth implants; however, all of these patients had a mixed history of smooth and textured implants. In both cases in this report, the patients had textured implants with a SILIMED polyurethane surface, which has the largest surface area and roughness on the market. 25 According to the clinical and pathological staging classification of the disease, first proposed in 2016 by the MD Anderson Cancer Center and now included in the 2019 update of the National Comprehensive Cancer Network guidelines, case 1 presented lymphoma involving the capsule without invasion of the adjacent breast parenchyma, corresponding to stage IC (T3N0M0), while case 2 can be considered stage IA (T1N0M0), with neoplastic cells confined to the fluid. Both were treated with complete excision of the capsules with free margins. 14 In 2016, Clemens and collaborators studied the treatment of 87 patients with BIA-ALCL and concluded that timely diagnosis and complete surgical excision of the lymphoma, implants and surrounding fibrous capsule is the ideal approach for the management of patients with this disease. Disease confined to the capsule (MD Anderson Cancer Center [MDA] stages IA-IIA) can be treated with surgery alone when complete excision of the capsule and implants is possible. 18 In more advanced cases, where there is infiltration of the breast parenchyma or chest wall, or lymph node involvement, the prognosis is worse and adjuvant chemotherapy is necessary, especially for non-resectable neoplasms. 14 There are no studies, or consensus, on the reimplantation of new breast implants. In the two cases reported in this study, both patients underwent implant reintroduction, case 1 prior to diagnosis with a 2-year follow-up, and case 2 only 3 months after removal and diagnosis. Close monitoring is necessary given the lack of knowledge about the evolution of these cases. Patients with a complete response to treatment can be monitored every 3-6 months for 2 years and then as clinically indicated. 14 One of the most critical aspects affecting treatment, and a potential point for improving the procedure, is the lack of diagnostic confirmation prior to surgery. 21 In case 1, the search for neoplastic cells in the previously aspirated fluid did not reveal malignancy, but the guidelines recommend that samples be sent for cell morphology by cytology, CD30 immunohistochemistry and flow cytometry for the evaluation, quantification and characterization of T cells. CD30 immunohistochemistry is a fundamental part of the diagnostic workup for BIA-ALCL.
14 We observed that, although studies have reported varied incidences worldwide, this is still an uncommon disease, with an excellent prognosis when diagnosed early and treated effectively. We emphasize that knowledge of the pathology by health care professionals and by patients is essential for early identification of signs and symptoms. Patients must be informed of the risks, especially with regard to textured implants. There is still no recommendation for the prophylactic removal of implants. Nonetheless, as there is considerable underreporting of cases, professionals need to start reporting them in order to better understand this pathology, so that more informed decisions can be made. Further research with more robust samples is still needed to fully determine the risks and benefits of using textured or smooth implants. Overall, it is a relatively rare pathology, and the lack of studies is an obstacle to gathering significant data and reaching definitive answers.
MiR-497 Suppresses YAP1 and Inhibits Tumor Growth in Non-Small Cell Lung Cancer

Background/Aims: To investigate the expression, clinical significance and cellular effects of miR-497 in non-small cell lung cancer (NSCLC). Methods: NSCLC cells were transiently transfected with miR-497 mimics or siRNA to up-regulate or down-regulate its expression. Quantitative real-time PCR (qRT-PCR) was used to detect the transcript level of miR-497. Luciferase assays, colony formation assays and BrdU incorporation assays were performed to identify the targets and role of miR-497 in NSCLC cells. Finally, the abundance of miR-497 was analyzed in a total of 51 NSCLC specimens. Results: The transcript levels of miR-497 were significantly decreased in NSCLC tissue (25/30; 83.3%). Low miR-497 levels in tumor tissue correlated with advanced pT stage. Additionally, miR-497 transcript levels correlated with the overall survival of NSCLC patients (n = 51, p = 0.022). Overexpression of miR-497 inhibited the proliferation of NSCLC cells, and down-regulation of miR-497 resulted in elevated NSCLC growth. Exogenous over-expression of YAP1 partially reversed the miR-497-induced inhibition of cell growth. Conclusion: miR-497 plays an important role in inhibiting the proliferation of NSCLC by targeting YAP1. Our results suggest that miR-497 is a potential therapeutic target in treating patients with NSCLC.

The survival rate of patients with NSCLC is less than 16% [3,4]. MicroRNAs (miRNAs) are a class of small non-protein-coding RNAs of approximately 22 nucleotides that are endogenously expressed in mammalian cells. miRNAs can post-transcriptionally regulate the expression of hundreds of target genes, thereby acting as oncogenes or tumor suppressors by modulating a wide range of biological functions such as cellular proliferation [5,6], differentiation [7], metastasis [6,8] and apoptosis [9]. MiR-497, a highly conserved miRNA located on chromosome 17p13.1 [10], was recently found to play important inhibitory roles in malignancies by suppressing cancer cell proliferation [11] and inducing apoptosis [12]. Yes-associated protein 1 (YAP1), a key molecule of the Hippo signaling pathway, has been implicated as an oncogene in many types of malignancies [13]. YAP1 promotes proliferation and tumor growth by regulating several context-specific transcriptional programs [14]. Because YAP1 had been screened out as a potential target of miR-497, the possibility that miR-497 inhibits NSCLC proliferation by targeting YAP1 is intriguing. In this study, we verified that miR-497 plays an inhibitory role in tumor growth in NSCLC cells, as previously reported [15]. In addition, we explored the clinical significance of miR-497 in NSCLC patients and found that the expression level of miR-497 was associated with the overall survival of patients with NSCLC. The expression level of YAP1 in NSCLC tumor tissues was negatively associated with the level of miR-497. Interference with miR-497 promotes the proliferation of NSCLC cells, whereas ectopic expression of miR-497 inhibits proliferation by suppressing YAP1.

Cell lines and tumor specimens

All of the NSCLC cell lines used in this study were purchased from the Cell Culture Center of the Shanghai Institute for Biological Sciences (Chinese Academy of Sciences, Shanghai, China), including five adenocarcinoma cell lines (A549, H1299, H358, H1975 and H1395); two squamous cell lines (H520 and SK-MES-1); and one large cell carcinoma cell line, H460.
The cancer cells were grown in monolayers in 1640 culture medium supplemented with 10% fetal bovine serum (FBS) and maintained at 37°C in humidified air with 5% CO2. A total of 51 tumor tissue specimens and 30 corresponding adjacent normal lung tissues were obtained through the Tumor Tissue Bank of Tianjin Cancer Hospital from patients who underwent curative resection for NSCLC at the Tianjin Cancer Institute and Hospital (TJMUCH) between 2007 and 2008. The median follow-up time for overall survival (OS) was 48 months. This study was approved by the Ethics Committee of TJMUCH. All patients signed written consent for the use of their specimens and disease information for future investigations, as required by the ethics committee.

siRNA, miRNA, plasmid construction, transfection, and luciferase assays

siRNAs against miR-497 and YAP1 were designed as described [16,17]. A scrambled siRNA sequence (5′-UUCUCCGAACGUGUCACGUTT-3′) was used as a control. The overexpression of miR-497 by mimics was described previously [18]. A non-targeting control (NTC) was used as a control. All of the RNAs were synthesized by GenePharma. The coding sequence of human YAP1 mRNA was cloned into the pcDNA 3.1 vector (Invitrogen). For transfection, the cells were plated at a density of 5 × 10^5 cells/well in 6-well plates. When the cells reached 80% confluence, 100 pmol of siRNA or 4 µg of DNA was transfected into the cells using Lipofectamine 3000 (Invitrogen) for 48 hours according to the manufacturer's instructions.

Protein extraction and Western blot analysis

Whole-cell extracts were prepared by lysing cells with SDS lysis buffer supplemented with a protease inhibitor cocktail (Sigma). A total of 20 μg of protein lysate was separated by SDS-PAGE, and the target proteins were detected by Western blot analysis with the following primary antibodies: rabbit anti-human YAP1 monoclonal antibody (Abcam) and mouse anti-human Ki-67 polyclonal antibody (Abmart). After further washes, the membranes were incubated with goat anti-rabbit/mouse peroxidase-conjugated secondary antibodies (Abcam), and the blots were developed using ECL (Millipore).

RNA extraction and real-time PCR

Total RNA of the cells and tissues was extracted using Trizol (Invitrogen) according to the manufacturer's instructions. Then, a total of 3 μg of RNA was reverse transcribed to single-stranded cDNAs using a reverse-transcription PCR (RT-PCR) system (TaKaRa). qRT-PCR for miR-497, YAP1 and c-myc was performed using SYBR premix real-time PCR Reagent (TaKaRa). The primers for miR-497, U6 and c-myc were listed elsewhere [19,20]. The primer sequences for YAP1 and β-actin were as follows: YAP1 (5'-AGA ACA ATG ACG ACC AAT AGCTC-3', 3'-GCT GCT CAT GCT TAG TCCAC-5'), β-actin (5'-CCT GGG CAT GGA GTC CTGTG-3', 3'-AGG GGC CGG ACT CGT CATAC-5'). The 25-μL PCR reaction mixture contained 2 μL of reverse transcription product, 1× PCR Master Mix and 0.2 μmol/L forward and reverse primers. U6 RNA was used to normalize the miR-497 RNA levels, and β-actin was used to normalize the levels of the other mRNAs. The results are presented as fold changes in cells or tissues.

Immunohistochemical staining

Immunohistochemical staining for YAP1 in NSCLC patient tissues was performed according to the manufacturer's instructions. In brief, paraffin-embedded sections of NSCLC tissue microarrays were deparaffinized and then heated in a pressure cooker for 3 minutes for antigen retrieval.
Then, the sections were incubated with rabbit anti-human YAP1 monoclonal antibody at a 1:500 dilution (Abcam) overnight at 4°C. The slides were then incubated with a goat anti-rabbit/mouse secondary antibody (Maxin) at 37°C for 30 min. A DAB Substrate Kit (Maxin) was used to carry out the chromogenic reaction. The results were scored by two experienced pathologic examiners who were unaware of the clinicopathologic data. The intensity of the YAP1 staining was evaluated using the following criteria: 0, negative; 1, low; 2, medium; 3, high. The extent of the staining was scored as 0, 0% stained; 1, 1% to 25% stained; 2, 26% to 50% stained; and 3, 51% to 100% stained. Five random fields (20× magnification) were evaluated under a light microscope. The final scores were calculated by multiplying the intensity score by the extent score. The staining results were divided into two grades by final score: 0 to 2, low staining; 3 to 9, high staining.

The colony formation assay and the 5'-bromo-2'-deoxyuridine (BrdU) incorporation assay

A colony formation assay was carried out as described previously [21]. In brief, six-well plates were seeded at a density of 800 cells per well. The plates were maintained at 37°C in a humidified incubator, and the culture medium was replaced every 3 days. After 2 weeks, the cells were stained with 0.5% Crystal Violet, and the average colony number in five random fields (4× magnification) was counted under a microscope. A BrdU ELISA kit (Roche) was also used to measure proliferation activity [22]. Briefly, the cells were cultured in a 96-well plate overnight. Subsequently, BrdU was added to the culture medium for 2 hours, during which the anti-BrdU-POD antibody binds to the BrdU incorporated into newly synthesized DNA. Finally, the reaction product was quantified by the absorbance value (A370 nm − A490 nm).

Statistical analysis

Statistical analyses were carried out using the IBM SPSS Statistics program 19.0 (Armonk, New York, United States). Each experiment was performed in triplicate, and the values are presented as the mean ± SD. Student's t test for unpaired data was used to compare mean values. Kaplan-Meier curves and log-rank tests were calculated for the YAP1 expression level. Pearson correlations were run to measure the association between pairs of variables. All of the probability values had a statistical power level of 90% at a 2-sided significance level of 5%. P < 0.05 was considered significant.

miR-497 inhibits cell proliferation in NSCLC

The expression level of miR-497 was detected in 8 NSCLC cell lines using qRT-PCR. Three cell lines (H1395, SK-MES-1 and H520) had high levels of miR-497, whereas the other cell lines (H358, A549, H1975, H1299 and H460) presented relatively lower expression levels (Fig. 1A). To explore the role of miR-497 in proliferation, H520 (high miR-497 expression) and H1299 (low miR-497 expression) cells were transiently transfected with anti-miR-497 siRNA and miR-497 mimics, respectively. Then, the transfected cells were subjected to proliferation assays: BrdU incorporation assays were performed, and growth curves were drawn. As shown in Fig. 1B, H520 cells transfected with anti-miR-497 showed significantly higher growth activity than the control cells (H520 with scrambled RNA). The differences emerged at 24 h and peaked at 48 h after transfection.
In contrast, the H1299 cells with the miR-497 mimic had significantly reduced proliferation activity compared to H1299 cells transfected with NTC at 48 h and 72 h. H520 cells transfected with anti-miR-497 siRNAs displayed an elevated Ki-67 level, whereas H1299 cells transfected with miR-497 mimics showed decreased Ki-67 expression (Fig. 1C). Additionally, in the colony formation assays, transfection of the miR-497 mimic reduced the number of colonies (Fig. 1D, P<0.05), whereas anti-miR-497 siRNAs increased it (Fig. 1D, P<0.05). Taken together, these data revealed a critical role of miR-497 in the proliferation of NSCLC cells.

Fig. 1. miR-497 inhibits proliferation in NSCLC cell lines. A, miR-497 expression in 8 NSCLC cell lines was assayed using qRT-PCR. U6 was used as a control to normalize the CT values. The resulting values were then transformed by applying 2^-ΔCT. The mean ± SD of the normalized CT values was from 3 independent experiments. B, proliferation activity was evaluated with BrdU incorporation assays in H520 and H1299 cells that were transiently transfected with siRNA and mimic for miR-497, respectively. Scrambled RNA and non-targeting control RNA were used as controls. The mean ± SD of the absorbance was from 3 independent experiments. Forty-eight hours after transfection, H520 and H1299 cells were used to detect the proliferation marker Ki-67 (C) and for the colony formation assay (D). The colonies were stained with crystal violet after incubation for 2 weeks (left). The mean ± SD of the colony counts from 3 independent experiments is shown to the right. *, P<0.05; **, P<0.001.

YAP1 is negatively regulated by miR-497

The 3,483-nt 3'-UTR of YAP1 was screened for complementarity with the sequence of miR-497 via a bioinformatic search in TargetScan. A 7mer-m8 (exact match to positions 2-8 of the mature miRNA) target sequence for miR-497 was found at nt 162-168 of the 3'-UTR of YAP1. A wild-type or mutant 3'-UTR of the YAP1 gene was inserted into the dual luciferase vector pmirGLO (Fig. 2A). In addition to the luciferase plasmid, the miR-497 mimics or anti-miR-497 siRNAs were cotransfected into HEK293T cells. The transfection efficiency was validated using qRT-PCR (Fig. 2B & 2C, top). The results of the luciferase assays showed that syn-miR-497 mimics significantly inhibited the luciferase activity of the wild-type 3'-UTR reporter gene, by 79.1% ± 6.7%, in HEK293T cells. As expected, syn-miR-497 did not influence the luciferase activity of the mutant 3'-UTR reporter gene (Fig. 2B, bottom). In contrast, the luciferase activity of the reporter gene was significantly elevated when anti-miR-497 siRNAs were cotransfected with the wild-type reporter (Fig. 2C, bottom). These results confirmed that YAP1 is a direct target gene of miR-497. In addition, transfection with the syn-miR-497 mimics led to a 58.3% reduction of YAP1 protein expression in H1299 cells (Fig. 2D). In contrast, anti-miR-497 siRNAs elevated the protein level of YAP1 by 41.6% in H520 cells (Fig. 2E). Additionally, the mRNA level of YAP1 was negatively regulated by miR-497 (Fig. 2F). Taken together, these data suggest that miR-497 suppresses YAP1 expression by directly targeting the 3'-UTR of the YAP1 gene.

miR-497 suppresses the proliferation of NSCLC by targeting YAP1

The correlation of the expression levels of miR-497 and YAP1 in 8 NSCLC cell lines was investigated. Five cell lines (H358, A549, H1975, H1299 and H460) had low miR-497 levels and high YAP1 levels.
In contrast, the other three cell lines (H1395, SK-MES-1 and H520) had high miR-497 levels and low YAP1 levels. Pearson correlation analysis revealed that the level of YAP1 correlated significantly with that of miR-497 in these cell lines (Fig. 3A; rs = -0.830, P = 0.011). Subsequently, we investigated whether miR-497 inhibits proliferation by directly targeting YAP1. Syn-miR-497 and pcDNA 3.1-YAP1 were cotransfected into H1299 cells, with NTC and the pcDNA 3.1 vector as controls. The proliferation activity of the transfected cells was assessed using BrdU assays (Fig. 3B). The results revealed that syn-miR-497 significantly reduced the proliferation activity of H1299 cells compared to NTC. However, pcDNA 3.1-YAP1 could partially antagonize this growth-inhibiting effect of syn-miR-497. Moreover, the protein levels of c-myc, a downstream gene of YAP1 [23], as well as Ki-67, which is usually used as a proliferation marker [24], were detected 48 hours after cotransfection (Fig. 3C). As expected, the levels of c-myc and Ki-67 in H1299 cells cotransfected with pcDNA 3.1 and syn-miR-497 were lower than in cells cotransfected with pcDNA 3.1 and NTC. Interestingly, when pcDNA 3.1-YAP1 was cotransfected with syn-miR-497, the levels of c-myc and Ki-67 were not reduced as much as in cells cotransfected with pcDNA 3.1 and syn-miR-497. These data suggest that miR-497 suppresses downstream gene expression and inhibits cell proliferation by targeting YAP1.

Inverse correlation between YAP1 and miR-497 expression in NSCLC patients

To explore the expression of miR-497 in NSCLC tissues, we examined the miR-497 level in tumor tissues and the corresponding adjacent normal lung tissues in a total of 30 patients with NSCLC (Fig. 4A). The data showed that 25 of the 30 cases (83.3%) had decreased miR-497 levels in tumor tissue compared to the adjacent normal tissue. The expression levels of miR-497 and YAP1 were detected in the tumor tissues of 51 patients with NSCLC by qRT-PCR and immunohistochemical staining, respectively (Fig. 4A). Pearson correlation analysis revealed that the expression of miR-497 was inversely correlated with that of YAP1 in the 51 patients (rs = -0.526; P < 0.001; Fig. 4C). The relationship between the miR-497 level and the clinicopathologic parameters of NSCLC is summarized in Table 1. The 51 patients were stratified into two groups by the median value of the miR-497 level. No significant correlation was noticed between the miR-497 level and age, gender, pathologic type, pN staging, or differentiation. However, pT staging had a significant association with the miR-497 level. In addition, we investigated the prognostic significance of miR-497 in these patients with NSCLC. Kaplan-Meier analysis revealed that the patients with a low miR-497 level had poorer survival (Fig. 4D). The median overall survival of the low miR-497 group and the high miR-497 group was 25.08 ± 3.67 months and 31.64 ± 2.08 months, respectively (P = 0.022). These findings support the conclusion that miR-497 suppresses NSCLC progression by targeting YAP1.

The results are presented as the CT value of tumor specimens divided by that of the corresponding normal tissues. The mean ± SD was from 3 independent experiments. C, Pearson analysis of the correlation of YAP1 and miR-497 expression levels in patients with NSCLC (n=51; R=-0.528; p<0.001). YAP1 staining intensity from 24 random regions in A was quantified using ImageJ software.
miR-497 expression levels were normalized to U6 and presented as ΔCT values. The mean ± SD was calculated from 3 independent experiments. D, Kaplan-Meier analysis of the survival of 51 patients with NSCLC (P = 0.022 by log-rank test) according to the expression of miR-497. The vertical bars on the survival curves indicate censored cases.
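As a side note on the quantitative conventions used throughout this study, the following is a minimal Python sketch (not the authors' analysis code) of the 2^-ΔCT transformation applied to the qRT-PCR data and of the composite immunohistochemistry score (intensity × extent, dichotomized at a final score of 3) described in the Methods; the function names and the example numbers are illustrative assumptions.

```python
# Illustrative sketch only: relative expression by the 2^-dCT convention
# and the composite IHC score (intensity x extent) from the Methods.

def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Return 2^-(CT_target - CT_reference), e.g. miR-497 normalized to U6."""
    return 2.0 ** (-(ct_target - ct_reference))

def ihc_grade(intensity: int, extent: int) -> str:
    """Grade a composite IHC score.

    intensity: 0 (negative) to 3 (high); extent: 0 (0%) to 3 (51-100%).
    Final scores 0-2 are graded 'low', 3-9 'high', as in the Methods.
    """
    if not (0 <= intensity <= 3 and 0 <= extent <= 3):
        raise ValueError("intensity and extent must each be in 0..3")
    return "high" if intensity * extent >= 3 else "low"

if __name__ == "__main__":
    # Hypothetical CT values and staining scores, for illustration only.
    print(relative_expression(ct_target=28.4, ct_reference=24.1))  # ~0.05
    print(ihc_grade(intensity=2, extent=3))                        # 'high'
```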
Embryonic mesothelial-derived hepatic lineage of quiescent and heterogenous scar-orchestrating cells defined but suppressed by WT1

Activated hepatic stellate cells (aHSCs) orchestrate scarring during liver injury, with putative quiescent precursors of mesodermal derivation. Here we use lineage-tracing from development, through adult homoeostasis, to fibrosis, to define morphologically and transcriptionally discrete subpopulations of aHSCs by expression of WT1, a transcription factor controlling morphological transitions in organogenesis and adult homoeostasis. Two distinct populations of aHSCs express WT1 after injury, and both re-engage a transcriptional signature reflecting the embryonic mesothelial origin of their discrete quiescent adult precursors. WT1 deletion enhances fibrogenesis after injury, through upregulated Wnt signalling and modulation of genes central to matrix persistence in aHSCs, and augmentation of the myofibroblastic transition. The mesothelial-derived lineage demonstrates punctuated phenotypic plasticity through bidirectional mesothelial-mesenchymal transitions. Our findings demonstrate functional heterogeneity of adult scar-orchestrating cells that can be whole-life traced back through specific quiescent adult precursors to a differential origin in development, and define WT1 as a paradoxical regulator of aHSCs, induced by injury but suppressing scarring.
Reporting summary

For mechanistic animal studies in which fibrosis was the readout, a power calculation was performed per the Home Office project licence. For in vitro and transcriptomic studies, experiments were undertaken on a minimum of three independent replicates. Data were excluded only in cases of technical failure. All in vitro experiments were repeated a minimum of three times. Animals were randomised to treatment groups. For in vivo studies, investigators were blind to genotype during data collection and analysis; investigators were also blinded to treatment groups for transcriptomic studies. Blinding to treatment groups for histological assessment was not possible, as injury is macroscopically and microscopically apparent.
All antibodies are commercially available and validated by the manufacturer for method and species.
The animals used for injury modelling in this study were all on a C57Bl/6 background, aged 8-12 weeks and >20 g body weight at the start of the experiments. Male mice were used for in vivo studies; male and female mice were used for in vitro studies. WT1GFP/+ knockin reporter mice were originally provided by H. Sugiyama (Osaka University School of Medicine, Japan). WT1CreERT2/+;Ai14 mice for lineage tracing were created by crossing knockin mice expressing tamoxifen-inducible Cre recombinase at the WT1 promoter locus (WT1CreERT2/+) with the Ai14 Cre reporter. The WT1-conditional background (WT1fl/fl) was used to create lines allowing WT1 deletion. The constitutive PDGFRCre line was obtained from N. Henderson (University of Edinburgh). All animal experiments were carried out under procedural guidelines and severity protocols, with ethical approval from the University of Edinburgh Animal Welfare and Ethical Review Body (AWERB) and the Home Office (UK). Anonymised human tissue was used only for illustration of human disease: two cases of alcoholic liver disease, two cases of primary biliary cirrhosis, two cases of cryptogenic cirrhosis, and one each of chronic viral hepatitis (HCV) and primary sclerosing cholangitis. No recruitment was needed.
Primary hepatic stellate cells from injured or uninjured animals were obtained by density centrifugation and examined immediately after isolation or after culture on plastic. Flow cytometry was performed on a BD FACS Aria II; BD FACSDiva was used for data collection and FlowJo for data analysis.
A novel approach for solving stochastic problems with multiple objective functions

In this paper we suggest an approach for solving a multiobjective stochastic linear programming problem with normal multivariate distributions. Our approach is a combination of a multiobjective method and a nonconvex optimization technique. The problem is first transformed into a deterministic multiobjective problem by introducing the expected value criterion and a utility function that represents the decision maker's preferences. The obtained problem is reduced to a mono-objective quadratic problem using a weighting method. This last problem is solved by DC (Difference of Convex functions) programming and the DC Algorithm (DCA). A numerical example is included for illustration.

Introduction

Multiobjective stochastic linear programming (MOSLP) is an appropriate tool for modelling many concrete real-life problems, because complete data about the parameters are rarely available; to deal with this type of problem, a randomness framework must be introduced. Such a class of problems includes investment and energy resources planning [2,30,35], manufacturing systems in production planning [13,14], mineral blending [18], water use planning [7,10] and multi-product batch plant design [36]. Among the applications of MOSLP to portfolio selection, we can mention the recent works of Shing and Nagasawa [28], Ogryczak [25], Ballestero [5] and Aouni [4]. In order to obtain solutions for MOSLP problems, it is necessary to combine techniques used in stochastic programming and in multiobjective programming. Two approaches can be considered, both involving a double transformation: the multiobjective problem is transformed into a mono-objective problem, and the stochastic problem into its deterministic equivalent. The difference between the two approaches is the order in which the transformations are carried out. Ben Abdelaziz [7] and Ben Abdelaziz et al. [8] call the multiobjective approach the one that first transforms the stochastic multiobjective problem into an equivalent deterministic multiobjective problem, and the stochastic approach the one that first transforms the stochastic multiobjective problem into a mono-objective stochastic problem. In most MOSLP problems, the coefficients are assumed to be random variables with known distributions; however, the specification of the distributions is very subjective. Many researchers assume discrete distributions. For instance, we can mention the STRANGE method proposed by Teghem et al. [31], the recourse method using a two-stage mathematical programming model by Klein et al. [17], STRANGE-MOMIX by Teghem [32], and the cutting plane methods of Abbas and Bellahcene [1], Amrouche and Moulai [3], and Chaabane and Mebrek [12]. Publications dealing with continuous distributions are far fewer in number and, in general, use Gaussian (normal) distributions with different parameters. In this context, Stancu-Minasian [29] describes a sequential method for solving MOSLP problems in which several probabilities are maximized; Goicoechea et al.
[16] present the Probabilistic Trade-off Development Method, or PROTRADE, which treats problems with general distributions for the random coefficients of linear objectives; Munoz and Ruiz [24] developed the ISTMO method, which uses the Kataoka criterion to handle the randomness and combines the concept of probability efficiency for stochastic problems with the reference point philosophy for deterministic multiobjective problems; and Bellahcene and Marthon [6] suggest a bisection-based method that generates a compromise solution to MOSLP problems in which the objective function parameters are random variables with multivariate distributions. In this paper, a novel method for solving MOSLP problems with multivariate normal distributions is proposed. First, we assume that the decision maker's preferences can be represented by exponential utility functions (one can use the same function for all the objectives). This assumption is motivated by the fact that an exponential utility function leads to an equivalent quadratic problem, which can be solved by a DC (Difference of Convex functions) method. DC programming and the DC Algorithm (DCA) were introduced in their preliminary form by Pham Dinh Tao in 1985 and have been developed by Le Thi and Pham Dinh since [19][20][21][22]. This method has proved its efficiency on a large number of nonconvex problems [23,26,27]. The remaining sections of this paper are organized as follows: in Section 2, the problem formulation is given. In Section 3, we analyze our new formulation of the problem, considering the particular structure induced by the combined use of utility functions and the weighting method; the new formulation results in a quadratic problem that can be solved efficiently by a DC algorithm. Section 4 shows how to apply DC programming and DCA to the resulting problem. Our experimental results are presented in Section 5.

Problem statement

Let us consider the multiobjective stochastic linear programming problem formulated as follows:

$$\min_{x \in S} \ \left( \tilde{c}_1^t x, \tilde{c}_2^t x, \dots, \tilde{c}_K^t x \right), \tag{1}$$

where $x = (x_1, x_2, \dots, x_n)$ denotes the $n$-dimensional vector of decision variables. The feasible set $S$ is a subset of the $n$-dimensional real vector space $\mathbb{R}^n$ characterized by a set of linear inequality constraints of the form $Ax \le b$, where $A$ is an $m \times n$ coefficient matrix and $b$ an $m$-dimensional column vector. We assume that $S$ is nonempty and compact in $\mathbb{R}^n$. Each vector $\tilde{c}_k$ follows a normal distribution with mean $\bar{c}_k$ and covariance matrix $V_k$. Therefore, every objective $\tilde{c}_k^t x$ follows a normal distribution with mean $\mu_k = \bar{c}_k^t x$ and variance $\sigma_k^2 = x^t V_k x$. In the following section, we will mainly be interested in the way problem (1) is transformed into an equivalent multiobjective deterministic problem, which in turn is reformulated as a DC programming problem.

Transformations and Reformulation

First, we take the notion of risk into consideration. Assuming that the decision maker's preferences can be represented by utility functions, under plausible assumptions about the decision maker's risk attitude, problem (1) is interpreted as:

$$\min_{x \in S} \ \left( E\left[U(\tilde{c}_1^t x)\right], \dots, E\left[U(\tilde{c}_K^t x)\right] \right). \tag{2}$$

The utility function $U$ is generally assumed to be continuous and convex. In this paper, we consider an exponential utility function of the form $U(r) = 1 - e^{-ar}$, where $r$ is the value of the objective and $a$ is the coefficient of incurred risk (a large $a$ corresponds to a conservative attitude). Our choice is motivated by the fact that exponential utility functions lead to an equivalent quadratic problem, which encouraged us to design a DC method to solve it simply and accurately.
Therefore, if $r \sim N(\mu, \sigma^2)$, we have, using the moment generating function of the normal distribution ($E[e^{-ar}] = e^{-a\mu + a^2\sigma^2/2}$):

$$E[U(r)] = 1 - e^{-a\mu + \frac{a^2 \sigma^2}{2}}. \tag{3}$$

Our aim is to search for efficient solutions of the multiobjective deterministic problem (2) according to the following definition: a point $x^* \in S$ is said to be efficient for problem (2) if there is no $x \in S$ such that $E[U(\tilde{c}_k^t x)] \le E[U(\tilde{c}_k^t x^*)]$ for all $k$, with strict inequality for at least one $k$. Applying the most widely used method for finding efficient solutions in multiobjective programming problems, namely the weighted sum method [8,11], we assign to each objective function in (2) a non-negative weight $w_k$ and aggregate the objective functions in order to obtain a single function. Thus, problem (2) is reduced to:

$$\min_{x \in S} \ E\left[ U\big(F(x, \tilde{c})\big) \right], \qquad F(x, \tilde{c}) = \sum_{k=1}^{K} w_k \, \tilde{c}_k^t x, \tag{4}$$

and $x^*$ is efficient for problem (2) if and only if $x^* \in S$ is optimal for problem (4). The function $F(x, \tilde{c})$ in (4) is a linear function of the random objectives $\tilde{c}_k^t x$; its variance depends on the variances of the $\tilde{c}_k^t x$ and on their covariances. Since each $\tilde{c}_k^t x$ follows a normal distribution with mean $\mu_k$ and variance $\sigma_k^2$, the function $F(x, \tilde{c})$ follows a normal distribution with mean $\mu$ and variance $\sigma^2$, where

$$\mu = \sum_{k=1}^{K} w_k \mu_k \tag{5}$$

and

$$\sigma^2 = \sum_{k=1}^{K} \sum_{s=1}^{K} w_k w_s \sigma_{ks}, \tag{6}$$

where $\sigma_{ks} = x^t V_{ks} x$ denotes the covariance of the random objectives $\tilde{c}_k^t x$ and $\tilde{c}_s^t x$. Finally, since minimizing $E[U(F)]$ in view of (3) amounts to minimizing $\mu - \frac{a}{2}\sigma^2$, we obtain the following quadratic problem:

$$\min_{x \in S} \ \sum_{k=1}^{K} w_k \bar{c}_k^t x - \frac{a}{2} \, \sigma^2(x), \tag{7}$$

or

$$\min_{x \in S} \ f(x) = \bar{c}_w^t x - \frac{a}{2} \, x^t V_w x, \qquad \bar{c}_w = \sum_{k=1}^{K} w_k \bar{c}_k, \quad V_w = \sum_{k=1}^{K}\sum_{s=1}^{K} w_k w_s V_{ks}, \tag{8}$$

where $\bar{c}_k = (\bar{c}_{k1}, \bar{c}_{k2}, \dots, \bar{c}_{kn})$ is the $k$-th component of the expected value of the random multinormal vector $\tilde{c}$, and $V_{ks}$ and $V_k$ are blocks of the positive definite covariance matrix $V$ of $\tilde{c}$.

The solution method

In this section, we briefly present the DC programming approach developed for solving nonconvex problems; for more details, see [23,26,27]. We then use DCA to solve problem (8).

Review of DC programming and DCA

A general DC program has the form:

$$\inf \{ f(x) = g(x) - h(x) : x \in \mathbb{R}^n \}, \tag{9}$$

where $g, h$ are lower semicontinuous proper convex functions on $\mathbb{R}^n$, called DC components of the DC function $f$, while $g - h$ is a DC decomposition of $f$. DC duality associates to problem (9) the following dual program:

$$\inf \{ h^*(y) - g^*(y) : y \in \mathbb{R}^n \}, \tag{10}$$

where $g^*$ and $h^*$ are respectively the conjugate functions of $g$ and $h$. The conjugate function of $g$ is defined by:

$$g^*(y) = \sup \{ \langle x, y \rangle - g(x) : x \in \mathbb{R}^n \}. \tag{11}$$

From [21], the most used necessary local optimality conditions for problem (9) are:

$$\partial h(x^*) \cap \partial g(x^*) \neq \emptyset \tag{12}$$

and

$$\partial g^*(y^*) \cap \partial h^*(y^*) \neq \emptyset. \tag{13}$$

DCA constructs two sequences $\{x^i\}$ and $\{y^i\}$ (candidates for being primal and dual solutions, respectively) such that their corresponding limit points satisfy the local optimality conditions (12) and (13). There are two forms of DCA: the simplified DCA and the complete DCA. In practice, the simplified DCA is used more often than the complete DCA because it is less time-consuming [19]. The simplified DCA has the following scheme:

Simplified DCA Algorithm
Step 1: Let $x^0 \in \mathbb{R}^n$ be given. Set $i = 0$.
Step 2: Compute $y^i \in \partial h(x^i)$.
Step 3: Compute $x^{i+1} \in \partial g^*(y^i)$, i.e., solve $\min \{ g(x) - \langle x, y^i \rangle : x \in \mathbb{R}^n \}$.
Step 4: If a convergence criterion is satisfied, then stop; else set $i = i + 1$ and go to Step 2.

We can also note that DCA is a descent method without linesearch [19][20][21][22]. For problem (8) we use the DC decomposition

$$f(x) = g(x) - h(x), \qquad g(x) = \chi_S(x) + \bar{c}_w^t x, \qquad h(x) = \frac{a}{2} x^t V_w x, \tag{14}$$

where $\chi_S(\cdot)$ is the indicator function of the set $S$. Since the matrix $V_w$ is positive definite, $h$ is a convex function. For the function $g$, since $\bar{c}$ is the vector of expected values of the random multinormal vector $\tilde{c}$, $g$ is the sum of a linear function and the indicator of the convex set $S$, so it is easy to demonstrate the convexity of $g$. After that, we compute the two sequences $\{x^i\}$ and $\{y^i\}$ such that $y^i \in \partial h(x^i)$ and $x^{i+1} \in \partial g^*(y^i)$.

Computation of $y^i$: We choose $y^i \in \partial h(x^i) = \{\nabla h(x^i)\}$, which is equivalent to calculating:

$$y^i = a V_w x^i. \tag{15}$$

Computation of $x^{i+1}$: We can choose $x^{i+1} \in \partial g^*(y^i)$ as the solution of the following convex (in fact, linear) problem:

$$\min \{ \bar{c}_w^t x - \langle x, y^i \rangle : x \in S \}. \tag{16}$$

The solution $x^{i+1}$ is considered optimal for problem (14) if one of the following conditions is verified:

$$\| x^{i+1} - x^i \| \le \epsilon \tag{17}$$

or

$$| f(x^{i+1}) - f(x^i) | \le \epsilon. \tag{18}$$

Finally, the DC Algorithm that we apply to problem (8) with the decomposition (14) can be described as follows:

DCAMOSLP Algorithm
Step 1: Let $x^0 \in S$ and a tolerance $\epsilon > 0$ be given. Set $i = 0$.
Step 2: Compute $y^i = a V_w x^i$.
Step 3: Compute $x^{i+1}$ as a solution of the linear program (16).
Step 4: If one of the conditions (17) or (18) is verified, then stop, $x^{i+1}$ is optimal for (14); else set $i = i + 1$ and go to Step 2.
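For problem (8) with the decomposition (14), the scheme above alternates a gradient evaluation of $h$ with a linear program over $S$. The following is a minimal Python sketch of this loop, written under the assumptions spelled out above (objective $\bar{c}_w^t x - \frac{a}{2} x^t V_w x$ over $S = \{x : Ax \le b,\ x \ge 0\}$) and using scipy's linprog for Step 3; the function name and the coded form of the stopping tests (17)-(18) are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch of the simplified DCA for
#   min_{x in S} f(x) = c.x - (a/2) x^T V x,  S = {x : A x <= b, x >= 0},
# with g(x) = chi_S(x) + c.x and h(x) = (a/2) x^T V x (V positive definite).
import numpy as np
from scipy.optimize import linprog

def dca_moslp(c, V, a, A, b, x0, tol=1e-6, max_iter=100):
    c = np.asarray(c, dtype=float)
    V = np.asarray(V, dtype=float)
    x = np.asarray(x0, dtype=float)

    def f(z):
        return c @ z - 0.5 * a * (z @ V @ z)

    for _ in range(max_iter):
        y = a * (V @ x)                      # Step 2: y^i = grad h(x^i)
        # Step 3: x^{i+1} minimizes g(z) - <z, y^i>, a linear program over S.
        res = linprog(c - y, A_ub=A, b_ub=b, bounds=(0, None))
        x_new = res.x
        # Step 4: stopping tests in the spirit of (17)-(18).
        if np.linalg.norm(x_new - x) <= tol or abs(f(x_new) - f(x)) <= tol:
            return x_new
        x = x_new
    return x
```

Because $h$ is differentiable, Step 2 is a plain gradient evaluation, and each Step 3 is a vertex-finding linear program over the polytope $S$, so every DCA iteration is cheap; this is consistent with the small iteration counts reported in Section 5.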
Experimental Results

In order to investigate the potential of DCA when applied to the considered problem, we implemented it and tested it on two small problems similar to the mathematical model (1). The first is taken from [11] to show the efficiency of the algorithm. The second example is given to present the performance of DCAMOSLP under variations of the weights and of the risk parameter $a$. Our results are compared, in terms of running time and number of iterations, to those given by the solver LINGO [33,34].

Example 1: Let us consider the following stochastic bi-objective programming problem. In [11], the non-dominated solution obtained for $w = (0.8, 0.2)^t$ is $(3, 0.5)$. We solve the same test problem by the DCAMOSLP algorithm for different values of the risk parameter $a$ while keeping the same weight vector $w = (0.8, 0.2)^t$. For this, we choose an acceptable tolerance error $\epsilon = 10^{-6}$ for the optimality test and set $x^0 = (0, 0)$ as the initial point. The results of this application are shown in Table 1, where nbr it is the number of iterations. We can observe that the non-dominated solution $(3, 0.5)$ is obtained for values of the parameter $a \le 10^{-2}$. We also note that the number of iterations decreases as the parameter $a$ decreases.

Example 2: We now test the performance of the DCAMOSLP algorithm on a problem with three objective functions and a larger set of feasible solutions, with $\bar{c} = (5, -2, 3, 6, 8, 4)$ and a positive definite covariance matrix. The results of this application for different values of the parameter $a$ and of the weight vector are given in Table 2, followed by the results given by the LINGO software in Table 3 for the same parameter values and weights. From these results, we observe that the DCAMOSLP algorithm gives efficient solutions of the studied multiobjective stochastic problem for small values of the incurred risk ($a \le 10^{-2}$). The number of iterations decreases with the decrease of this parameter. We also note that the proposed DCAMOSLP algorithm finds the same solutions as LINGO and that it is more efficient than LINGO in terms of CPU time and number of iterations required to reach the optimum.

Conclusion

We have presented a DC programming based method for solving a multiobjective stochastic linear programming problem with multivariate normal distributions in which the objective functions are to be minimized. According to the computational experiments, our method outperforms the solver LINGO in terms of number of iterations and running time. A further contribution to this issue would consist of considering real problems and comparing the results with those of other methods and solvers used in multiobjective stochastic optimization.
Performance Study of Locality and Its Impact on Peer-to-Peer Systems

This paper presents a measurement study of locality-aware peer-to-peer solutions on the Internet Autonomous System (AS) topology, which reduce AS hop counts and increase the number of nearby source nodes in P2P applications. We evaluate the performance of a topology-aware BT system called TopBT against BitTorrent (BT) by constructing an AS graph and measuring the hops between nodes, to observe the impact on quality of service in P2P applications.

Introduction

Peer-to-Peer (P2P) is a distributed computing model which aims at sharing resources; the concept is not completely new. However, P2P systems are a natural evolution of decentralized system architecture [1], in which a peer is a node that can act as a client and a server simultaneously in a dynamic environment. Nodes can join or leave the system freely and can exchange resources directly without the help of a third-party server. Popular P2P systems generate a massive amount of traffic over the Internet: it has been reported that 65%-70% of Internet backbone traffic is P2P, and it is estimated that 50%-65% of download traffic and 75%-90% of upload traffic is generated by P2P communities [2]. P2P networks can be classified according to their functionality into three main classes: file sharing, video streaming and VoIP. File sharing P2P applications like BitTorrent (BT) and TopBT are the most popular of the three classes, whereas video streaming applications include PPLive and PPStream, and Skype is a VoIP application.

Zatto [3] was introduced as a localized P2P live streaming system, and Skype [4] was modified to implement locality in super-peer selection. Finally, TopBT [5] was introduced as a localized version of the BitTorrent software, developed by the Ohio State University R&D department, that actively discovers its network proximities to its connected peers; this unique feature separates it from BitTorrent. It also improves the peers' transmission rate for faster downloads, reduces unnecessary traffic due to topology unawareness, and maintains a faster download speed compared to other clients [6]. BitTorrent, a well-known non-localized file sharing application in P2P networking, is commonly used for transferring large files in a vast community environment with exceptional download speed [7].

The popularity of P2P applications imposed a massive traffic load that raised doubts about the ability of the Internet service providers (ISPs) carrying P2P traffic [8] to sustain the cost of transit traffic. These and other reasons inspired research to replace random P2P peer-selection algorithms with locality-aware algorithms, where locality is a distance measurement that can be utilized to express locality awareness.

P2P locality has recently raised a lot of interest, as P2P content distribution dramatically increases the traffic on inter-ISP links; to solve this problem, the idea of keeping a fraction of the P2P traffic local to each ISP was introduced a few years ago. Several fundamental issues of locality are being explored, such as measuring the content distribution and understanding the harmful effects of locality, which intensify the demand for the content file shared on the network. P2P applications and ISPs have different business models, each attempting to attract more users by increasing quality of service (QoS).
P2P application developers came to consider the underlying networks as free resources, while ISPs attempted to reduce their inter- and intra-domain traffic to increase their profits [9]. This led ISPs to consider P2P applications harmful services and to start domesticating them by blocking their traffic with the help of shaping devices [5]; P2P applications, on the other side, counter-struck by encrypting their traffic and using port hopping, which leads to an endless chase.

However, to tackle ISP issues, Autonomous System (AS) hops can be utilized to harvest AS-level topology information, which closely relates to the AS-based ISP pricing model. The implementation of locality-awareness algorithms in P2P applications was widely studied in past years [10]-[12]. Each network on the Internet is recognized by a unique identifier known as an Autonomous System Number (ASN) and owns a set or block of Internet Protocol (IP) addresses assigned to it; in order to prevent traffic from propagating, content should be exchanged with other IP addresses in the same AS.

Sniffing is one of the most effective techniques in attacking a wireless network. A sniffer [13] is a program that eavesdrops on network traffic by grabbing information that travels over a network, and passive sniffing is the source of many network-based attacks. Passive sniffing involves employing a sniffer to monitor incoming packets, using a feature of network interface cards called promiscuous mode. In this mode, a network card passes all packets to the operating system, rather than only those unicast or broadcast to the host [13].

Wireshark [14], the world's foremost network protocol analyzer, enables users to capture and interactively browse the traffic flowing on a computer network. This software is customary across many industries and educational institutions. Wireshark uses pcap (packet capture), an application programming interface, to capture packets, so it can only capture packets on the types of networks that pcap supports. Yi Cui et al. [15] propose locality awareness in BitTorrent-like P2P applications, presenting an optimal solution with a minimum AS hop count distribution structure; they also show that seeding cannot improve the standard BitTorrent download time but can improve its locality policies significantly.

The paper is organized as follows. Section 2 depicts the methodology used to achieve the goals of our study. The analysis of the results and the performance of locality and TopBT are studied and discussed in Section 3. Finally, Section 4 gives conclusions and possible future work to improve the quality of service in P2P applications.

Methodology and Data Collection

Our data collection methodology was to download torrent files using two different file sharing applications, namely BitTorrent and TopBT, operated on two separate computers. The download times of both torrent clients were calculated and recorded simultaneously. Wireshark captured and saved the data packets of both torrent clients. Then, a utility program was used to extract source and destination IP addresses from the Wireshark capture files.
The AWK tool [16] was used to delete duplicate source and destination IP addresses. The Team Cymru tool [17] was then used to convert IP addresses into Autonomous System Numbers (ASNs). Java code was developed to find the AS paths from source IP addresses to destination IP addresses. Having extracted these paths, we compared the paths generated by the BitTorrent and TopBT applications. This procedure was applied to different file formats (audio, video, application files, etc.), and the whole dataset was collected at a single geographical location. During the download of some files we encountered very long download times, long enough that a user might lose interest in downloading that particular file. We investigated this scenario to show the impact of locality on the quality of experience. Our calculations were based on Autonomous System Numbers; after gathering these numbers, AS path length was our metric for measuring locality. The following steps were taken to achieve our goal. First, we reviewed the concepts of Peer-to-Peer (P2P) networking and locality in P2P applications. Then, the Wireshark and AWK software tools were used for data collection. Finally, a Java program and the Cymru tool were employed for the data analysis, in which IP addresses were extracted and converted to Autonomous System Numbers (ASNs). Two well-known P2P file-sharing systems were used, namely BitTorrent and TopBT. Programming and analysis tools such as Java, AWK, and Cymru were used to map IPs to AS numbers. MS Excel was used to present the AS paths as the final output, with which the paths of BitTorrent and TopBT are compared to measure their QoS and locality in a P2P network. A minimal sketch of this processing pipeline is given below.
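The deduplication and IP-to-ASN steps can be sketched as follows (in Python rather than AWK/Java, with our own function names). The bulk query format and host name for Team Cymru's IP-to-ASN whois service are stated here to the best of our knowledge and should be verified against the service's documentation before use.

```python
# Minimal sketch: deduplicate captured IPs and map them to ASNs via
# Team Cymru's bulk whois interface (host and format assumed; verify
# against the service documentation).
import socket

def dedup_lines(path="ip_pairs.txt"):
    """Remove duplicate lines while preserving order (the AWK step)."""
    seen, out = set(), []
    with open(path) as f:
        for line in f:
            if line not in seen:
                seen.add(line)
                out.append(line.strip())
    return out

def ips_to_asns(ips):
    """Send a bulk query to whois.cymru.com and parse 'ASN | IP | ...' rows."""
    query = "begin\n" + "\n".join(ips) + "\nend\n"
    with socket.create_connection(("whois.cymru.com", 43)) as s:
        s.sendall(query.encode())
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk
    mapping = {}
    for row in reply.decode().splitlines():
        parts = [p.strip() for p in row.split("|")]
        if len(parts) >= 2 and parts[0].isdigit():
            mapping[parts[1]] = int(parts[0])  # IP -> ASN
    return mapping
```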
Results Locality awareness has emerged as the main approach to tackling the unwanted-traffic issue: locality-aware algorithms allow peers to measure their distances from other nodes and use this knowledge when selecting nearby content sources. To implement such an algorithm, several questions must be addressed: how to measure distances, how to determine location, and how to define near and far nodes. Collecting underlying network measurements and utilizing this information answers these questions. A peer should be able to measure the AS hop count of the path to other peers, to map IP addresses to their AS numbers, and to measure delay, bandwidth, and loss along the path. Finally, it needs an algorithm that makes use of this information (the locality algorithm). The main objective of locality-awareness studies is to construct a P2P system that satisfies the requirements of ISPs, by reducing hop counts and increasing the number of local source nodes, while at the same time not hurting the quality of experience in P2P networks. Our results show that the average AS hop count between neighbors in the TopBT platform is shorter than between neighbors in the BT network. In addition, we observed that the locality awareness implemented in TopBT helps reduce the inter-domain traffic that passes between ASes. However, implementing a locality-awareness algorithm may reduce the performance, quality of service (QoS), and quality of experience (QoE) of a P2P network if the required content is unpopular. In other words, file popularity in a P2P file-sharing network affects the locality-awareness algorithm: the algorithm needs popular files to increase the performance of P2P applications, and otherwise it can decrease normal performance. In our measurement study we obtained results for Autonomous System paths of sources and destinations at the inter-AS routing level. In Figure 1 we evaluate the average Autonomous System paths for audio files, for which TopBT has a better download rate than BitTorrent. Figure 2 and Figure 3 show the source AS path comparison for video files and application files, respectively; for video files, which contain large amounts of data, TopBT performed well, as shown in Figure 2. Figure 4 shows the average source AS paths for document files. In P2P networks, nodes act as client and server simultaneously. Figure 5 shows the average AS paths for audio destination files; Figures 6 and 7 show the average destination AS paths for video and application files, in which the performance of BitTorrent is slightly better than TopBT, although TopBT performed well overall. Figure 8 shows the average AS hop paths of destinations for document files, comparing the performance of TopBT with BitTorrent. Observing these average AS paths of source and destination files, we can list our findings. First, the average AS hop path of TopBT is shorter in most cases. However, in some cases this path is longer than BitTorrent's. The reason is that the downloaded files in these cases are not popular, meaning there are no nearby localized nodes to download the file from; in such cases TopBT attempts to download the files from fast nodes with the highest upload bandwidth, which increases the path length. We can also observe that the download time is reduced in many scenarios for TopBT, showing that locality can also help improve the quality of experience. Unfortunately, this is not the case for all files: TopBT attempts to reduce path lengths first, which can hurt download time by fetching the file from closer but slower nodes. Conclusions & Future Work In this work, we conducted a measurement study to investigate the advantages and drawbacks of implementing a locality-awareness algorithm in P2P networks, examining locality in BitTorrent and TopBT. We compared the performance of TopBT with BitTorrent and used the Wireshark tool to collect information from the P2P network. In addition, an AS graph was constructed and a shortest-path algorithm implemented to measure the AS hop count between nodes, using peers collected from BitTorrent as input to measure their destinations. In future work, one can use other P2P applications and compare their results, and repeat this measurement study at different locations and over diverse ISP Internet connections. The P2P model will remain dominant in the coming years, as we believe research and development will continue to adapt P2P overlays to the current Internet infrastructure. The evolution of P2P systems will also provide insights into the development of other large-scale distributed systems.
Figure 1. Source autonomous system hops path for audio files.
Figure 2. Source autonomous system hops path for video files.
Figure 3. Source autonomous system hops path for application files.
Figure 4. Source autonomous system hops path for documents files.
Figure 5. Destination autonomous system hops path for audio files.
Figure 6. Destination autonomous system hops path for video files.
Figure 7. Destination autonomous system hops path for application files.
Figure 8. Destination autonomous system hops path for documents files.
2018-12-18T07:39:11.083Z
2015-01-29T00:00:00.000
{ "year": 2015, "sha1": "367447d64f99e0a680dfe844cdf1a8a1194cbba0", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=53718", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "367447d64f99e0a680dfe844cdf1a8a1194cbba0", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
260887505
pes2o/s2orc
v3-fos-license
Reconstructing a bijection on the level of Le diagrams Lukowski, Parisi, and Williams formulated the T-duality map of string theory at a purely combinatorial level as a map on decorated permutations. We combinatorially describe this map at the level of Le diagrams. This perspective makes the dimension shift under the map more transparent. The structure of the paper is as follows. We open with a background section, Section 2, where we define T-duality at the level of decorated permutations following [LPW23] in Section 2.3. In Section 3 we give a combinatorial reformulation of the T-duality map directly on Le diagrams, the main result being Theorem 3.7. In Section 4, we prove that the construction does indeed give the correct Le diagram, resulting in Theorem 4.3. In particular, we see in Theorem 4.1 how viewing T-duality on Le diagrams directly explains the dimensional relationship between the positroid cells on either side of the map. We briefly discuss some extensions in Section 5. Appendix A gives another formulation of our main construction as a row-by-row algorithm. BACKGROUND The Grassmannian is a classical geometric object that has been extensively studied due to its nice structure and its many connections to different areas of mathematics, including combinatorics, algebraic and differential geometry, and representation theory. For algebraic combinatorialists, the interest lies in the beautiful combinatorics arising from its decompositions. Through the classic Schubert decomposition of the Grassmannian into Schubert cells, which can be indexed by partitions, we are led to familiar combinatorial machinery such as Young tableaux, Schur functions, and Schubert polynomials. A standard reference for many of these topics is [Ful97]. From a different decomposition, first described by Gelfand, Goresky, MacPherson, and Serganova in [Gel+87], we are led to matroids and matroid polytopes. Specifically, we can divide the elements of the Grassmannian, based on which Plücker coordinates are non-zero, into matroid strata, also known as the Gelfand-Serganova strata. Unfortunately, Mnëv's Universality Theorem [Mnë88] tells us that the structure of these matroid strata can be as complicated as any algebraic variety. Instead of looking at the full Grassmannian, Postnikov in [Pos06] initiated the study of a certain subset of the Grassmannian called the positive Grassmannian, by giving a combinatorial description of its cells, which turned out to have a much nicer geometric structure. This opened the door to the extensive study of the positive Grassmannian, both combinatorially and through the multitude of emerging connections with other branches of mathematics and physics. Just as the Grassmannian can be subdivided into its matroid strata, we can analogously do the same with the positive Grassmannian. This gives us positroids and positroid cells, see Definition 3.2 of [Pos06]. In other words, we partition the positive Grassmannian based on the Plücker coordinates that are strictly positive. This is called the positroid stratification of the positive Grassmannian.
For us, the interest in positroid cells lies in their rich combinatorial structure, arising from the many families of combinatorial objects that Postnikov in [Pos06] showed index these cells. Beyond positroids, these objects include decorated permutations, Le diagrams, Grassmann necklaces, and equivalence classes of reduced plabic graphs. We will only need decorated permutations and Le diagrams. To give the connection between decorated permutations and cells of the positive Grassmannian, Postnikov used another object called the Grassmann necklace, which can be read off from the bases of the positroid, see §16 of [Pos06]. For decorated permutations, the result is as follows. Theorem 2.3 - (from Lemma 16.2, Theorem 17.1 of [Pos06]) Decorated permutations of [n] with k anti-excedances index the cells of the (k, n) positive Grassmannian, Gr≥0_{k,n}, and we denote by S_π the positroid cell indexed by π. Going back to Example 2.2, this means that π indexes the cell S_π of Gr≥0_{3,8}. Decorated permutations are simple and succinct objects that encode positroids, but one property that they do not easily reveal is the dimension of the associated positroid cell. For this, we need the next family of combinatorial objects, Le diagrams. Definition 2.4 - (Definition 6.1 of [Pos06]) A Le-diagram is a filling D of a Young diagram of shape λ with 0's and +'s such that D avoids the Le-configuration: that is, no 0 has both a + above it and a + to its left, which we refer to as the Le-condition. For 0 ≤ k ≤ n we say that the Le-diagram D is of type (k, n) if the shape λ fits inside a k × (n − k) rectangle. An example of a Le-diagram D is given in Figure 1a. In §20 of [Pos06], Postnikov gave two bijections between Le-diagrams and decorated permutations, the first by associating a series of other objects (hook diagrams, networks, and plabic graphs) to Le-diagrams. The second, which we describe here, uses an algorithm from §19 of [Pos06] going through a slightly different object called pipe dreams. Given a Le-diagram D of type (k, n), we associate a decorated permutation π_D on [n] as follows:
(i) In D, we replace each 0 with a cross and each + with an elbow joint.
(ii) View the south-east (SE) border of D as a lattice path with n steps, and label the edges with 1, . . . , n along this path from the top-right corner of the bounding k × (n − k) rectangle to the bottom-left corner of the bounding rectangle.
(iii) Then label the n edges of the north-west (NW) border of D so that, viewing D as a grid, the rows and columns have the same labels (the opposite horizontal/vertical edges are labelled the same). We call this diagram P the pipe dream associated to D.
(iv) To get the decorated permutation π associated to P and D, we follow the "pipes" of P from the SE border to the NW border. That is, π(i) = j if the pipe starts at i and ends at j. A horizontal pipe starting and ending at i, where i labels vertical edges, is denoted a co-loop π(i) = i. A vertical pipe starting and ending at i, where i labels horizontal edges, is denoted a loop π(i) = i.
To illustrate this algorithm, we use an example. Example 2.5 - Given the Le-diagram D of type (3,8) in Figure 1a, we label the SE and NW borders with 1, . . . , 8 as above, and replace the 0's and +'s with crosses and elbow joints. This gives the pipe dream P in Figure 1b, where arrows are added to show the direction in which to follow the pipes for the permutation. FIGURE 1: The Le-diagram and pipe dream associated to Example 2.2. The arrows on the pipes indicate the direction to follow to read off the decorated permutation.
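To make the pipe-dream algorithm concrete, here is a small Python sketch under our own reading of the rules above: a cross passes a pipe straight through, while an elbow turns a westward pipe north and a northward pipe west, so that pipes only ever move up or to the left. The function and variable names are ours, and since Figure 1 is not reproduced here, the demo uses a tiny hypothetical diagram rather than the one from the figure.

```python
def border_labels(lam, k, n):
    """Walk the SE border of the Young diagram lam (row lengths, padded with
    0's to length k) inside the k x (n-k) rectangle, from the top-right
    corner to the bottom-left, labelling the n steps 1..n.
    South steps label rows, west steps label columns."""
    lam = list(lam) + [0] * (k - len(lam))
    row_label, col_label = {}, {}
    r, c = 1, n - k
    for i in range(1, n + 1):
        if r <= k and lam[r - 1] == c:   # south step: right end of row r
            row_label[r] = i
            r += 1
        else:                            # west step: bottom of column c
            col_label[c] = i
            c -= 1
    return row_label, col_label

def le_to_decorated_permutation(lam, k, n, plus):
    """Read the decorated permutation off a Le-diagram.
    `plus` is the set of boxes (row, col), 1-indexed, filled with +.
    Returns (pi in one-line notation, decorations for fixed points)."""
    lam = list(lam) + [0] * (k - len(lam))
    row_label, col_label = border_labels(lam, k, n)

    def route(r, c, direction):
        # direction 'W' = travelling west, 'N' = travelling north
        while True:
            if (r, c) in plus:                  # elbow: W -> N, N -> W
                direction = 'N' if direction == 'W' else 'W'
            if direction == 'W':
                c -= 1
                if c == 0:                      # exit the west edge of row r
                    return row_label[r]
            else:
                r -= 1
                if r == 0:                      # exit the north edge of column c
                    return col_label[c]

    pi, decor = {}, {}
    for r in range(1, k + 1):                   # pipes entering at row ends
        i = row_label[r]
        pi[i] = route(r, lam[r - 1], 'W') if lam[r - 1] > 0 else i
        if pi[i] == i:
            decor[i] = 'co-loop'
    for c in range(1, n - k + 1):               # pipes entering at column bottoms
        i = col_label[c]
        height = sum(1 for row in lam if row >= c)
        pi[i] = route(height, c, 'N') if height > 0 else i
        if pi[i] == i:
            decor[i] = 'loop'
    return [pi[i] for i in range(1, n + 1)], decor

# Hypothetical one-box example: lambda = (1) inside a 1 x 1 box, one +.
# Gives pi = [2, 1], the top cell of Gr>=0_{1,2}, with no loops or co-loops.
print(le_to_decorated_permutation([1], 1, 2, {(1, 1)}))
```

One can check on small examples that a row of all 0's routes straight west and returns its own label (a co-loop), and a column of all 0's routes straight north (a loop), matching the observation below.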
To read off the decorated permutation, we follow the pipes in the direction of the arrows to get
π_D = 1 2 3 4 5 6 7 8
      3 2 5 1 6 8 7 4,
which is the decorated permutation from Example 2.2. Observe that rows of all 0's in D correspond to co-loops in π_D, while columns of all 0's correspond to loops. Pipes only ever move up or to the left, and avoiding the Le-configuration in D corresponds to pipes in P crossing at most once; furthermore, once two pipes cross, they never subsequently share a box, even without crossing (via an elbow joint). It turns out that this map is indeed a bijection. One of the reasons to look at Le-diagrams is how easily they display the dimension of the positroid cell that they index. In particular, the dimension of the corresponding positroid cell is determined by counting the number of +'s in the Le-diagram. Theorem 2.6 - (Theorem 6.5, Corollary 20.1, Theorem 20.3 of [Pos06]) The map D → π_D is a bijection from Le-diagrams of type (k, n) to decorated permutations of [n] with k anti-excedances. Therefore, Le-diagrams of type (k, n) index the cells of the (k, n) positive Grassmannian, Gr≥0_{k,n}. Furthermore, let S_D be the positroid cell indexed by D. Then the dimension of the cell S_D is the number of +'s in D. Going back to Example 2.5, this means that D indexes the cell S_D of Gr≥0_{3,8}, which corresponds to the cell S_π from Example 2.2, and the dimension of S_D is 5 since there are 5 +'s in D. (2.3) T-duality on decorated permutations. The positive Grassmannian has deep connections to many different areas of mathematics and physics. We have already mentioned some of the mathematical connections; others include oriented matroids, polytopes and polyhedral subdivisions, non-crossing partitions, lattice paths, cluster algebras, and quantum algebras. The positive Grassmannian can also be studied in the spirit of Schubert calculus, through varieties, flags, and symmetric functions. On the physics side, there have been applications to KP solitons, types of asymmetric exclusion processes, and scattering amplitudes (via the amplituhedron and Wilson loop diagrams). We refer the reader to [Pos06; Pos18; KW13; AH+16; AHT14; CW07; LPW23; PSBW23; AFY22a; AFY22b] and the references therein for further details. The aspect of the physics we are interested in is how T-duality from string theory, via the connection with the amplituhedron, manifests itself combinatorially. Lukowski, Parisi, and Williams [LPW23] showed that at the level of decorated permutations, T-duality becomes the following very elegant map. Definition 2.7 - (Definition 5.1 of [LPW23]) The T-duality map from loopless decorated permutations on [n] to co-loopless decorated permutations on [n] is defined as
π → π̄ : (a_1, a_2, . . . , a_n) → (a_n, a_1, . . . , a_{n−1}),
where the permutations are written in one-line notation, and any fixed points in π̄ are declared to be loops. That is, for a given loopless π, we have π̄(i) = π(i − 1), where all fixed points are loops, and we call π̄ the T-dual decorated permutation. Equivalently, the T-duality map is a bijection between loopless positroid cells S_π of Gr≥0_{k+1,n} and co-loopless positroid cells S_π̄ of Gr≥0_{k,n}. Furthermore, we have the following dimensional relationship:
dim(S_π̄) − 2k = dim(S_π) − (n − 1).
While this dimensional relationship was proven to exist between T-dual loopless cells of Gr≥0_{k+1,n} and co-loopless cells of Gr≥0_{k,n}, it was not clear where it was coming from.
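As a quick computational sketch of Definition 2.7 (function names and the small example are our own), the following implements the shift together with a weak anti-excedance count, so that the drop from k + 1 to k anti-excedances can be checked on examples.

```python
def t_dual(pi):
    """T-duality on a loopless decorated permutation pi in one-line notation:
    (a_1, ..., a_n) -> (a_n, a_1, ..., a_{n-1}). Returns the shifted
    permutation and the set of new fixed points, which Definition 2.7
    declares to be loops."""
    shifted = [pi[-1]] + pi[:-1]
    loops = {i + 1 for i, a in enumerate(shifted) if a == i + 1}
    return shifted, loops

def anti_excedances(pi, coloops=frozenset()):
    """Weak anti-excedances: values b with pi^{-1}(b) > b, together with
    fixed points decorated as co-loops."""
    pos = {a: i + 1 for i, a in enumerate(pi)}
    return {b for b in pos if pos[b] > b or b in coloops}

# Hypothetical example: pi = (3, 4, 1, 2) is loopless with 2 anti-excedances,
# so it indexes a cell of Gr>=0_{2,4} (k + 1 = 2). Its T-dual (2, 3, 4, 1)
# has a single anti-excedance, matching k = 1.
pi = [3, 4, 1, 2]
dual, loops = t_dual(pi)
print(dual, loops)            # [2, 3, 4, 1], set()
print(anti_excedances(pi))    # {1, 2}
print(anti_excedances(dual))  # {1}
```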
The desire to better understand this relationship motivated the question of viewing T-duality as a map on Le-diagrams, in which one very easily sees the dimension of the positroid cells they index. The key result of [LPW23] (Theorem 6.5) is that the T-duality map provides a bijection between BCFW tilings of a particular hypersimplex, ∆_{k+1,n}, and BCFW tilings of the particular amplituhedron A_{n,k,2}. After the original appearance of our work in [Hu21], Parisi, Sherman-Bennett, and Williams [PSBW23] extended the work done in [LPW23] and proved the main conjecture in [LPW23], which strengthens the bijection to all positroid tilings. Their proof utilized viewing the T-duality map via plabic graphs and plabic tilings. Together with our result, then, the T-duality map has a direct combinatorial formulation for decorated permutations, Le diagrams, and plabic graphs. See Section 5 for further discussion. T-DUALITY ON THE LEVEL OF LE DIAGRAMS The direction of T-duality we will look at is π̄ → π, where π̄ is a co-loopless permutation on [n] with k anti-excedances and π is a loopless permutation on [n] with k + 1 anti-excedances. On Le-diagrams, we are thus looking at the map D̄ → D, where D̄ is the Le-diagram associated to π̄ and D is the Le-diagram associated to π. We will define our map by showing how to take each column of D̄ that contains at least one + and convert it into an explicit configuration of +'s which, when glued together, characterize D (see Definition 3.7). This gives an explicit, combinatorial form for the T-duality map on Le-diagrams. In Appendix A we give an alternate description of this map which acts row by row and is more algorithmic. Throughout the rest of this paper, we will refer to the rows and columns of a Le-diagram D by the same labelling as that which gives the associated decorated permutation (i.e. from its pipe dream). Boxes in D, and in the k × (n − k) rectangle, will then be referred to by their coordinates (i, j) under this labelling. A box is considered as "existing" if it is a valid box to be filled within the shape λ. Note that all such valid boxes have i < j. An example of this notation is given in Example 3.1. Finally, since we are dealing with two Le-diagrams, one on each side of the map, we refer to the "corresponding" row/column as the row/column with the same label on the opposite side of the map. We will also order boxes in columns from top to bottom and in rows from right to left, in accordance with labels going from smallest to largest. For rows/columns, having i < j means that i is to the right of and/or above j. For boxes, "first" refers to the right-/top-most in a row/column, "last" refers to the left-/bottom-most in a row/column, and "next" refers to the next box to the left/below in a row/column. (3.2) Preliminaries and shape of D. We start with a co-loopless decorated permutation π̄ on [n] with k anti-excedances, which we denote in two-line notation as
π̄ = 1   2   · · · n
     a_n a_1 · · · a_{n−1}.
Let {b_1, . . . , b_k} be the k anti-excedances of π̄, ordered such that b_1 < · · · < b_k (recall that b_u is an anti-excedance if π̄^{−1}(b_u) > b_u, where we cannot have π̄(b_u) = b_u since π̄ is co-loopless). In particular, let i_u = π̄^{−1}(b_u) be the position of b_u. Then we have b_u = π̄(i_u) = a_{i_u−1} for 1 ≤ u ≤ k. We denote the associated Le-diagram by D̄ with shape λ̄. Since π̄ is co-loopless, we have that every row in D̄ has ≥ 1 +'s; in particular, λ̄ = (λ̄_1, . . . , λ̄_k) with every λ̄_u ≥ 1.
We also have that the rows of D̄ are labelled by these b_u, as in the left diagram of Figure 2. Note that there are no rows of size 0, as π̄ is co-loopless. We want to produce a Le-diagram D of type (k + 1, n) and dimension dim(S_D̄) − 2k + (n − 1), with associated decorated permutation π of [n] given by
π = 1   2   · · · n
    a_1 a_2 · · · a_n,
which has k + 1 anti-excedances and is loopless. The loopless condition means that D has to have ≥ 1 +'s in each of the n − (k + 1) columns. To determine the shape λ of D, which only depends on the anti-excedances of π, consider the following:
- a_n is not an anti-excedance of π̄, since π̄ is co-loopless, π̄(1) = a_n and 1 ≯ a_n. In particular, a_n is the label of a column in D̄.
- Under T-duality, b_u for 1 ≤ u ≤ k stays an anti-excedance, as i_u > b_u = π̄(i_u) = π(i_u − 1). Since π is loopless, we have either π(i_u − 1) < i_u − 1, or π(i_u − 1) = i_u − 1 where i_u − 1 must be a co-loop. In either case, b_u is an anti-excedance of π.
- a_n is always an anti-excedance of π since π is loopless.
- There are no other anti-excedances of π since π(i) = π̄(i + 1).
Based on these observations, {b_1, . . . , b_k} ∪ {a_n} are the k + 1 anti-excedances of π. In particular, the labels of the rows of D (including rows of length 0) are exactly the same as those of D̄, with the addition of a_n. Then the shape of D can be constructed from D̄ by removing the column labelled a_n and inserting a row labelled a_n in the appropriate position, maintaining the order of the labels of the new boundary lattice path, see Figure 2. FIGURE 2: Steps to construct the shape of D from D̄: remove column a_n from D̄ and insert a row labelled a_n where the dashed line is, making sure that the new boundary path is in the correct order. Here j is the index such that b_{j−1} < a_n < b_j. The shape λ of D then follows (including parts of size 0), where j is the index such that b_{j−1} < a_n < b_j. If a_n > b_u for all 1 ≤ u ≤ k, then let j = k + 1. We have 0 ≤ λ_u ≤ n − (k + 1) for all 1 ≤ u ≤ k + 1, and at most k + 1 non-zero parts, as needed. Thus the order of rows (and anti-excedances) of D is {b_1, . . . , b_{j−1}, a_n, b_j, . . . , b_k}, as in the rightmost diagram of Figure 2. When a_n = n, we have a row of size 0 (a co-loop) labelled by n. Note that we always have either b_1 = 1 or a_n = 1, and thus the first row of D will be labelled with 1 and is a full row of size λ_1 = n − (k + 1). Our construction of D is built from two distinct shapes, which we will call L-shapes and strings of +'s. To create the L-shapes, we will look at columns ℓ ≠ a_n in D̄ with at least one +. We refer to +'s in D̄ that are not a leftmost (last) + in their row as non-last. Definition 3.2 - Let ℓ ≠ a_n be a column in D̄ with at least one +. Let (f, ℓ) be the first (top-most) + in this column and (g, ℓ) be the last (bottom-most). To each such ℓ, we place in D a corresponding shape which we call an L-shape, specified as follows. For the indices:
- Let m, possibly equal to ℓ, be the right-most column such that all columns strictly between ℓ and m − 1 in D̄ are of the same height as column ℓ and are filled with only 0's.
- Let b_B = max{g, a_n} if a_n < ℓ; otherwise let b_B = g.
- For b_T: let h be the first row above f with a + to the left of column ℓ in D̄, if such a row exists, and then let b_T = max{a_n, h} when a_n < f, and let b_T = h when a_n > ℓ. Note that h always exists when a_n > ℓ, as D̄ has a + at (1, a_n).
For the +'s:
- For the vertical part: place a + at (b, ℓ) if there is a non-last + at (b, ℓ) in D̄, or if there is a last + there but b is a row with a + in column a_n, for rows b from f to g in D̄.
Fill all other boxes of the L-shape with 0's. Remark 3.3 -
- The horizontal string of +'s corresponds to consecutive columns of all 0's in D̄ to the right of any column with at least one +.
Notice that these L-shapes cover those columns of D which correspond to columns of D̄ (ignoring a_n) with at least one +, as well as columns of all 0's to the right of these and of the same height. What is left are the columns of all 0's which are to the left of any columns of the same height that contain +'s. We cover these columns by strings of +'s. Definition 3.4 - For consecutive columns of all 0's in D̄ of the same height and preceding any column of the same height containing a +, we place in D a corresponding string of +'s, + · · · +, and these will be the only +'s in these columns. We leave the specification of which rows these strings of +'s go in to Definition 3.7, where it becomes clearer how the two shapes associated to columns in D̄ glue together to form D. These strings of +'s behave like the horizontal part of an L-shape. With appropriate conventions for the indices of Definition 3.2 we could view them as degenerate L-shapes without the vertical part, but this becomes more intricate than simply considering them separately. To give some intuition about what these shapes are doing, the idea here is that:
- The vertical part of an L-shape has almost the same configuration of +'s as the +'s in D̄ in that column ℓ, with two exceptions. The first is that the vertical part of the L-shape is expanded either to include the new row a_n or to reach a row directly above whose last + has not yet passed, adding a + in that new row. The other is that we instead place 0's in the L for last +'s in column ℓ of D̄ in rows without a + in column a_n.
- The horizontal parts of L-shapes start from a column with at least one + in D̄ and extend to the right with a string of +'s, up to the previous column of +'s in D̄ or the start of a row.
- Strings of +'s not in L-shapes extend to the left until the start of the next row below, and to the right until a column from D̄ with at least one +.
From Definitions 3.2 and 3.4, we immediately get the following statement. Proposition 3.5 - There is at least one + in every column of D. More specifically, (i) there is exactly one + in columns corresponding to those in D̄ with no +'s, and (ii) there are s_ℓ + t_ℓ + 1 +'s in columns ℓ corresponding to those in D̄ with at least one +, where in column ℓ of D̄ there are s_ℓ non-last +'s and t_ℓ last +'s in rows with a + in column a_n. Now that we have our building blocks, what is left is to piece them together. How these shapes are glued together is even nicer than one might first guess from the indices in Definition 3.2. We will use this to define in which rows the strings of +'s from Definition 3.4 appear, and to prove a nice characterization of how the L-shapes glue. First we need a definition. Definition 3.6 - Given a Young diagram D of shape λ = (λ_1, · · · , λ_k), we partition D into k rectangles called sections, where section j has dimension j × (λ_j − λ_{j+1}) for 1 ≤ j ≤ k and is bounded by two rows (one possibly empty). We let λ_{k+1} = 0. We name each section by its last row and include empty sections in the count of k (when some λ_j = λ_{j+1}).
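A minimal sketch of Definition 3.6 (with our own function name) computes the list of sections of a Young diagram; each section is returned as a (rows, width) pair, with width possibly 0 for empty sections.

```python
def sections(lam):
    """Partition a Young diagram of shape lam = (lam_1, ..., lam_k) into its
    k sections of Definition 3.6: section j is a j x (lam_j - lam_{j+1})
    rectangle, with the convention lam_{k+1} = 0. Empty (width 0) sections
    are kept so that there are always exactly k of them."""
    out = []
    for j in range(1, len(lam) + 1):
        nxt = lam[j] if j < len(lam) else 0  # lam_{j+1}, taking lam_{k+1} = 0
        out.append((j, lam[j - 1] - nxt))    # (rows in section, width of section)
    return out

# For a hypothetical shape (5, 5, 4): one empty section, then a 2 x 1
# section, then a 3 x 4 section; the widths sum to lam_1 = 5.
print(sections([5, 5, 4]))  # [(1, 0), (2, 1), (3, 4)]
```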
Definition/Theorem 3.7 - D is made up of gluing together the L-shapes of Definition 3.2 and the strings of +'s of Definition 3.4, where in each (non-empty) section b we have a chain of shapes of the form
b: + + · · · + · · · + + · · · + + · · · +,
where the bottom right of each shape is glued to the last + in the vertical part of the previous shape, except the first, which is glued to row b. More precisely, we have the following properties for each section:
- There is either none or exactly one string of +'s in the chain, and if there is one it is leftmost in the chain. The other shapes in the chain are all L-shapes. There may be 0 L-shapes, but there must be at least one shape total in the chain.
Proof. Putting together Definitions 3.2 and 3.4, D is made up of two types of blocks: one consisting of L-shapes, where ℓ corresponds to a column with at least one + in D̄, and the other consisting of strings of +'s, which correspond to consecutive columns of all 0's in D̄. In the blocks, the shaded regions represent boxes filled with all 0's, which extend to fill the rest of the rows in columns m to ℓ in the shape λ; the upper and lower lines represent the border of λ. As every column in D was originally a column in D̄, and the new row a_n is accounted for in the L-shapes, these blocks fit in and cover each column of the shape of D. Thus each section of D is made up of a non-empty chain of L-shapes and potentially one string of +'s. Observe the following:
- If there is a column with at least one + in section b of D̄, then there will be a first (right-most) L-shape in section b of D, glued to the beginning of row b. Here ℓ_1 is the first column of +'s in D̄ in section b.
- When section b in D̄ consists of columns of all 0's, then the string of +'s extends to fill the whole bottom row b of the section.
- If the column directly before the next row below b has at least one + in D̄, then there is no string of +'s in section b. Otherwise, there is exactly one string of +'s in section b.
What we need to show is how the shapes glue together. Consider a particular section b. We already saw the special case where a section contains only columns of all 0's, so we can assume there is at least one L in section b. We also saw that the first L in the section always glues to the beginning of the row. The desired gluing rule is the definition for the gluing of the string of +'s, so all we need to show is how L's glue on the left to other L's. Consider an L-shape with vertical part in column ℓ (or with no vertical part) whose right-most and bottom-most box is at (b_B, m). We want to see where b_B is in relation to the previous L, with vertical part in column m − 1, say in rows i to j. By construction of b_B, either b_B = a_n, or b_B is a row such that in D̄ there is a + in box (b_B, ℓ) with all 0's below it in column ℓ. In the latter case, by the Le-condition on D̄, not only are there only 0's below (b_B, ℓ), but all boxes to the left of those 0's must also be 0's. Therefore, if we look at column m − 1 in D̄, any +'s below row b_B must be the last +'s of their rows. Additionally, these rows all do not have a + in column a_n, as by definition of b_B we have either a_n < b_B < m − 1, or a_n > ℓ > m − 1; but in the latter case, as we already observed, there are no +'s after column m − 1. Consequently, the L-shape for column m − 1 will have all 0's in its vertical part below row b_B.
Now, as D̄ has a + in box (b_B, ℓ) and since D̄ is a Le-diagram, either there is a non-last + at (b_B, m − 1), in which case the L-shape for column m − 1 also has a + there, or the first + in column m − 1 of D̄ is strictly below b_B, say in row f, in which case the L-shape for column m − 1 will have row i = b_B (by the condition on h in Definition 3.2 for column m − 1). If b_B = a_n, by construction there must be a row g < a_n such that in D̄ there is a + at box (g, ℓ) with all 0's below, and, as before, all boxes to the left of those 0's must also be 0's. By a similar analysis, the L-shape for column m − 1 will have all 0's in its vertical part for rows below a_n. Now, in column m − 1 of D̄, if the first + occurs in a row > a_n, or if the last + occurs in a row < a_n, then by construction the L-shape for column m − 1 will have i = a_n or j = a_n respectively. Otherwise, we have that i < a_n < j. Regardless, the L-shape for column m − 1 will thus have a + in box (a_n, m − 1). In both cases we get the stated relation (where j could be equal to b_B): L-shapes, and therefore also strings of +'s, glue to the left of L-shapes at the last (bottom-most) + in the vertical part of the L. ■ (3.5) An example. We illustrate this construction through the example shown in Figure 3. To briefly check that this D is indeed the right diagram for the given D̄:
- D is a Le-diagram of type (5, n) = (k + 1, n) (notice it avoids the Le-configuration and has 5 rows).
- Every column has exactly one + except columns p, r, s, u, which have two +'s. Thus D is loopless and of dimension dim(S_D) = (n − 5) + 4 = n − (k + 1) + k = n − 1 (recall this is the number of +'s), as there are n − 5 columns and 4 columns with an extra +. This gives the correct relation, since we wanted dim(S_D̄) − 2k = dim(S_D) − (n − 1), recalling that dim(S_D̄) = 8 = 2k.
- One can check the associated decorated permutation π of D in two-line notation (where · · · denotes π(a) = a + 1 for all a in between the explicitly written values): π is loopless and has 5 anti-excedances (circled). Shifting the bottom line one to the right (and wrapping around) exactly gives π̄, the decorated permutation associated to D̄.
(3.6) D is indeed a Le-diagram. To begin our proofs that this construction is well defined and agrees with T-duality, we first verify that under this filling D avoids the Le-configuration and is thus a valid Le-diagram. Theorem 3.8 - Under this filling, D is a Le-diagram of type (k + 1, n). Proof. By the construction of the shape of D in Section 3.2, we have that D is of type (k + 1, n). We just need to show that there cannot be a Le-configuration in D. Suppose in D we have a Le-configuration: a 0 in box (j, ℓ) with a + above it at (i, ℓ) and a + to its left at (j, m), where the boxes in between (indicated by dots in the figure) are filled with 0's. As Theorem 3.7 tells us that L-shapes and strings of +'s are glued together such that the horizontal part of a shape is glued to the last + in the vertical part of an L (or to a row), without loss of generality we only need to consider the case when both +'s are in vertical parts of L-shapes. By the construction of the L-shape for column m, there are two cases: (i) either j = a_n, or (ii) j ≠ a_n and in D̄ there must be a + in box (j, o) for some column o ≥ m > ℓ. First suppose j = a_n. Since there is no + at (a_n, ℓ) in the L-shape for column ℓ, we must be in the case that a_n is not between b_T and b_B for that L-shape, and since there is a + at (i, ℓ), this means that the whole L-shape for column ℓ must be above a_n. In particular, in D̄ the last + in column ℓ must be in a row above a_n.
Consequently, by the choice of b_B for the L-shape at column ℓ, we must have ℓ < a_n. But this is impossible, as then there could not have been a box at (a_n, ℓ) in D. Now suppose we are in case (ii). By the construction of the L-shape for column ℓ, in D̄ there must also be a 0 in box (j, ℓ) (as it could not have been a last + in its row, since there is still a + at (j, o) with o > ℓ), and the first + in column ℓ must be in some row f < j (as otherwise the choice of b_T would mean there is no + in box (i, ℓ) in D). But now we are done, as there is a Le-configuration in D̄ at (j, o), (j, ℓ) and (f, ℓ), which is impossible as D̄ is a Le-diagram. ■ PROOF OF LE DIAGRAM T-DUALITY Finally, we show that the construction described in Section 3 gives the correct Le-diagram and hence is the T-duality map. We have that D is a Le-diagram of type (k + 1, n) from Theorem 3.8. It remains to show that D is loopless, has the right dimension, and has π as its associated decorated permutation. A benefit of our approach is that we can clearly see how the dimension of D arises in relation to the dimension of D̄: namely, it comes from having at least one + in every column of D, with the number of additional +'s being exactly the number of non-last +'s in D̄. Theorem 4.1 - Under the filling described in Section 3, D is loopless and has dimension dim(S_D) = dim(S_D̄) − 2k + (n − 1). Proof. For this proof, we only need Proposition 3.5, and we refer to (i) and (ii) from that statement. First, it follows automatically from having ≥ 1 + in every column of D that D is loopless. For the dimension of S_D, (i) and the +1 in (ii) give one + in every column of D, for a total of n − (k + 1) +'s. Summing up what is left over in (ii), using the s_ℓ and t_ℓ notation from Proposition 3.5, gives
Σ_ℓ (s_ℓ + t_ℓ) = (# of non-last +'s in D̄) − (# of non-last +'s in column a_n of D̄) + (# of non-last +'s in column a_n of D̄) = # of non-last +'s in D̄ = dim(S_D̄) − k,
where the sums run over columns ℓ ≠ a_n in D̄ with at least one +. The equalities come from:
- Summing over s_ℓ gives the number of non-last +'s in each column of D̄, excluding column a_n.
- Summing over t_ℓ gives the number of rows with a + in column a_n of D̄ whose last + is not in column a_n, which is in other words the number of non-last +'s in column a_n.
- As D̄ is co-loopless and there are k rows, all with at least one +, there are k last +'s out of a total of dim(S_D̄), which gives us the last equality.
Theorem 4.2 - Under the filling described in Section 3, the decorated permutation associated to D is π = (1 2 · · · n / a_1 a_2 · · · a_n), where π is loopless and has k + 1 anti-excedances. Proof. Since D is loopless by Theorem 4.1 and is a Le-diagram of type (k + 1, n) by Theorem 3.8, its associated decorated permutation π is also loopless and will have k + 1 anti-excedances. Now, what we want to prove is that
π(i) = π̄(i + 1) = a_i ≠ i + 1 for non-fixed points i + 1 of π̄,
π(i) = i + 1 for fixed points i + 1 of π̄ (for i ≠ n), and
π(n) = π̄(1) = a_n.
To get from D to its decorated permutation, we go through its pipe dream (see Section 2.2); that is, starting from the label i on the SE border, we follow its pipe until it reaches a label j on the NW border of D, indicating that π(i) = j. We will refer to the pipe starting at i as the corresponding path for π(i), or just for i, where the turns indicate where the +'s are. In the following figures, the shaded areas indicate boxes filled with 0's.
First, for the easiest case of π(n), we look at the possibilities of what n corresponds to in D: (a) n is a row in D; (b) n is a column in D and a column of all 0's in D̄; (c) n is a column in D and a column with at least one + in D̄. In case (a), as D̄ is co-loopless, n must be a column in D̄, and since the only change in rows and columns is through a_n, we have π(n) = n = a_n as needed. In case (b), Theorem 3.7 tells us there is a horizontal string of +'s at the end of diagram D, either glued to a row, in which case this row must be a_n as D̄ is co-loopless, or glued to the vertical part of the last L in D. Now, each + in this vertical part of the L comes either from a last + in D̄ in a row with a + in column a_n, or it is the extra + in row a_n. However, for a row b to have a + at (b, a_n) in D̄, we must have a_n > b. That is, this L has the bottom-most + of its vertical part in row a_n. In either case, the + in column n in D is in row a_n, giving π(n) = a_n as needed. Finally, in case (c), Definition 3.2 places in D an L-shape whose vertical part is in column n. Once again, as this is the last L in D, by the same argument as in case (b), its bottom-most + in column n is in row a_n, and thus π(n) = a_n. Similarly, for the case where i + 1 is a fixed point of π̄ with i ≠ n, so that i + 1 ≠ a_n is a column of all 0's in D̄, we look at what i corresponds to in D: we have a + in box (i, i + 1) in case (a), or a string of +'s in columns i, i + 1 in the same row in case (b); either way we get π(i) = i + 1 as needed. In case (c), Definition 3.2 places an L-shape with vertical part in column i in D. By Theorem 3.7, whichever shape contains the + in column i + 1 must glue to the left of this L at the last + in column i, say at (b_L, i). Thus we get +'s at (b_L, i) and (b_L, i + 1), with all 0's below in column i and above in column i + 1, which gives π(i) = i + 1 as needed. Lastly, we have the hardest case, when i + 1 is not a fixed point of π̄ and i ≠ n; that is, in D̄ either i + 1 is a column with at least one + or i + 1 is a row. Say π̄(i + 1) = j for some j ≠ i + 1 (we also have j ≠ a_n). We want to show that π(i) = j. In the subsequent proofs, Definitions 3.2 and 3.4 and Definition/Theorem 3.7 will be used without explicit citation. Note that, in general, every path in D̄ for a non-fixed point i + 1 must start with one + in row/column i + 1 and is then built by alternating between two +'s in the same row and two in the same column, or vice versa, until row/column j, where the path ends with one + in row/column j. Consider a path in D̄ which goes through m + 1 columns of +'s, where ℓ_j denotes the columns of +'s and b_j denotes the rows of +'s. Here, note that i + 1 > b_1 > · · · > b_m and ℓ_1 < · · · < ℓ_{m+1}. If instead i + 1 is a row, there is an additional + in the first column of +'s at (i + 1, ℓ_1), where i + 1 > b_1. If instead j is a row, there is an additional + in the last column of +'s at (j, ℓ_{m+1}), where j < b_m. Since the shaded areas in the path are filled with all 0's, by the Le-condition for D̄ the vertical shaded regions extend to the left until the end of the diagram and the horizontal shaded regions extend to the top of the diagram. Now, to tackle the problem at hand, we split the paths π̄(i + 1) in D̄ based on their relation to a_n and consider them separately. As an example of the different types of paths, see Figure 6. (i) i + 1 < j < a_n: the path in D̄ is completely to the right of a_n, and j is a column.
(ii) j < i + 1 ≤ a_n or i + 1 ≤ a_n < j: the path passes through column a_n, and thus either the last + in the path is before a_n (and j is a row), or the path contains +'s in column a_n. (iii) j < a_n < i + 1 or a_n < i + 1 < j: the path passes through where row a_n in D will be. (iv) a_n < j < i + 1: the path is completely below where row a_n in D will be, and j is a row. FIGURE 6: Examples of the four types of paths in D̄ considered in the proof of π. Each path is labelled by its type. On the left, the rectangle indicates where the column a_n is in D̄. On the right, the dashed line indicates where the row a_n will be in D. Note that all the paths in (ii) with +'s to the left of a_n must contain +'s in column a_n because of the Le-condition for D̄, since we know there is a + at (1, a_n) whenever a box exists there. In the special case of a_n = 1, paths in (i) and (ii) do not exist. Starting with (i) and (iv), we are in the cases where the path in D̄ lies, respectively, entirely to the right of column a_n or entirely below where row a_n will be; here j = ℓ_{m+1}, and either i + 1 = ℓ_1 is a column or i + 1 > b_1 is a row with a + at (i + 1, ℓ_1). Notice that the top + in each column of the path, except possibly ℓ_{m+1}, is not the last + in its respective row in D̄; in the figure, these are the circled +'s. This implies that in the L's corresponding to these columns in D, there is also a + in these positions (b_1, ℓ_1), . . . , (b_m, ℓ_m); these +'s are circled in the figures below. For cases (i) and (iv), since a_n > j or a_n < j, all L-shapes corresponding to columns ℓ_1, . . . , ℓ_m, and for (i) also column j, each have their b_T = h ≠ f, a_n, where f is the row of the first + in D̄ in that column. Now let us look at two consecutive columns ℓ_s, ℓ_{s+1}, for s ≠ m. We know that any +'s in column ℓ_s strictly between rows b_s, b_{s+1} must be last +'s in their rows, because of the Le-condition for D̄, and thus these correspond to a 0 at (b, ℓ_s) in D. If there are +'s in column ℓ_s above b_{s+1} in D̄, then by the Le-condition there must be a + at (b_{s+1}, ℓ_s) in D̄. In particular, this is not the last + in row b_{s+1}, and thus there is a + at (b_{s+1}, ℓ_s) in D. If the first + in column ℓ_s in D̄ is in row b_{s+1} or below, then since there are no +'s in rows b with b_s > b > b_{s+1} after column ℓ_s, by the Le-condition for D̄ applied to row b_s, row b_{s+1} is the first row above b_s that has a + to the left of column ℓ_s. That is, we have b_T = b_{s+1} (by the h-condition in Definition 3.2) for the L in column ℓ_s, and thus there is a + at (b_{s+1}, ℓ_s). In either case, the L in column ℓ_s in D always has a + at (b_{s+1}, ℓ_s) and at (b_s, ℓ_s), with 0's between. Since in D the L in column ℓ_{s+1} glues to the left of the L in column ℓ_s at its bottom-most +, there must be 0's in row b_{s+1} strictly between columns ℓ_s and ℓ_{s+1}; this gives the configuration shown on the right of the figure. For the final columns ℓ_m, ℓ_{m+1}, we look at the two path cases separately. For (i) and column j = ℓ_{m+1}, notice that in D̄ there must be a row b < b_m with a + to the left of column j, since we know at the least there is a + at (1, a_n). Since we know that (b_m, j) is the first + in column j and that b_T ≠ f, a_n for the L in column j in D, we must have that b_T = b, and thus there is a + at (b, j) with 0's all above it in D. For column ℓ_m, using the same argument as for ℓ_s with row b playing the role of b_{s+1}, in the L in column ℓ_m there must be a + at (b, ℓ_m) and at (b_m, ℓ_m), with 0's in between.
For (iv) and columns ℓ_m, ℓ_{m+1}, we have the same analysis as for ℓ_s, ℓ_{s+1}, except that now the + at (j, ℓ_{m+1}) is the last + of its row in D̄, which is a row without a + in column a_n, and thus corresponds to a 0 at (j, ℓ_{m+1}) in the L-shape for column ℓ_{m+1}. Now, since we know that L's glue to the left of previous L's at the first + from the bottom, we get the corresponding configuration in D regardless of whether i + 1 is a column or a row: the path that starts at (b_1, ℓ_1) ends at column ℓ_{m+1} for paths in (i), or at row b for paths in (iv), and hence in either case ends at j. Finally, we look at what happens with i in D. In case (a), regardless of how many +'s are in column i in D̄, we must have in D a + at (b_1, i) and 0's below it in column i, since we know how L's glue. In case (b), as D̄ is co-loopless and by the Le-condition for D̄, we must have that b_1 = i (when i + 1 is also a row, recall that there are +'s at (i + 1, ℓ_1), (b_1, ℓ_1) in D). In case (c), notice that b_1 < i is a row in D̄ with its last + to the left of column i. Then, in the first column on or to the right of i with at least one + in D̄, any +'s below row b_1 must be last +'s in their rows by the Le-condition for D̄; thus the L corresponding to this column has its bottom-most + in row b_1, by the Le-condition for D. If there are no such columns, as D̄ is co-loopless, row b_1 must be the bottom-most row of column i. In particular, by these properties, and as we know how shapes glue, whichever shape is in column i in D must have a + at (b_1, i) with all 0's below it in column i. In all the cases, we can connect these path starts at the circled point to the rest of the path, and thus we get that the corresponding path for i in D leads to j, giving us π(i) = j as needed for paths in (i) and (iv). Now consider cases (ii) and (iii). For paths in (ii), first notice that we have already handled the case when the last + in the path is in a column before a_n and j is a row: all the arguments from paths in (i) stay the same except for the path end, for which we can use the argument from paths in (iv). When the path in (ii) contains +'s in column a_n, or in (iii) passes through where row a_n will be, we have in D̄ a path through columns ℓ_1, ℓ_2, . . . , ℓ_{m+1}, where for (ii) ℓ_s = a_n, and for (iii) ℓ_s is the column whose +'s straddle the row that will be a_n; in the figures, * indicates +'s in column a_n that are not part of the path, and the dashed line refers to where row a_n will be in D. We also have that either j = ℓ_{m+1} is a column or j < b_m is a row with a + at (j, ℓ_{m+1}), and either i + 1 = ℓ_1 is a column or i + 1 > b_1 is a row with a + at (i + 1, ℓ_1). Then notice that for columns ℓ_1, . . . , ℓ_{s−1} the analysis from paths in (i) and (iv) stays exactly the same. For columns ℓ_{s+1}, . . . , ℓ_{m+1}, we note that all rows b_s, . . . , b_m, and j if it is a row, must have +'s in column a_n in D̄ by the Le-condition, as there is a + at (1, a_n). In particular, this means that the L's corresponding to columns ℓ_{s+1}, . . . , ℓ_{m+1}, which are all > a_n, have +'s in exactly the same positions as in D̄ in rows < a_n. For paths in (ii), we are done with all the relevant columns, as to get D the column a_n = ℓ_s is removed and we know how the L's are glued. For paths in (iii), we need to look at what happens to column ℓ_s, which row a_n will pass through, and column ℓ_{s−1}.
For column ℓ_s, since row a_n will occur strictly between rows b_s and b_{s−1}, there will be a + at (a_n, ℓ_s) in the L corresponding to column ℓ_s. Since row b_s has a + in column a_n in D̄, with ℓ_s > a_n, there is also a + at (b_s, ℓ_s) in the L. For column ℓ_{s−1}, we already have a + at (b_{s−1}, ℓ_{s−1}) in its corresponding L. Now, if the first + in column ℓ_{s−1} in D̄ is in a row < a_n, then the corresponding L will contain row a_n and thus there will be a + at (a_n, ℓ_{s−1}). Otherwise, if the first + is at (b, ℓ_{s−1}) for b_{s−1} ≥ b > a_n in D̄, then the corresponding L will have b_T = a_n, and a + at (a_n, ℓ_{s−1}), as by the Le-condition for D̄ there cannot be any rows in between b_{s−1} and a_n with a + to the left of column ℓ_{s−1}. Additionally, this means that any +'s at (b, ℓ_{s−1}) in D̄, for b_{s−1} > b > a_n, must be last +'s in their rows, which do not have a + in column a_n in D̄, and so correspond to 0's in the L for column ℓ_{s−1}. That is, the L in column ℓ_{s−1} always has +'s at (a_n, ℓ_{s−1}) and (b_{s−1}, ℓ_{s−1}) with 0's between. Thus we get the corresponding configuration in D regardless of whether i + 1 or j is a column or a row. For the path ends, as columns after and rows above a_n follow the original paths from D̄: when j = ℓ_{m+1} is a column and a_n > b_m, there are only 0's above row b_m in column ℓ_{m+1} in D, and thus the path π(i) turns up after (b_m, ℓ_{m+1}). When j = b < b_m is a row, there is an extra + at (j, ℓ_{m+1}) and all 0's to the left in row j in D, and thus the path π(i) turns up at (b_m, ℓ_{m+1}) and then to the left at (j, ℓ_{m+1}). In the special case of a_n < b_m where j = ℓ_{m+1}: since we know that the + at (b_m, j) is the first in its column in D̄, the L in column j in D must have its b_T = a_n, with a + at (a_n, j). Column ℓ_m stays the same as before, with +'s at (a_n, ℓ_m) and (b_m, ℓ_m), and 0's between, in D. In all of the cases, the portion of the path π(i) starting at (b_1, ℓ_1) will end at j in D, as needed. Finally, what is left is to look at the path starts. The analysis on i + 1 and i stays exactly the same as before (since it only depends on D̄), with the slight exception, for paths in (ii), of when i + 1 = ℓ_1 = a_n, and, for paths in (iii), of when a_n > b_1. When i + 1 = ℓ_1 = a_n, the only difference is that the circled + in the analysis for i in D is now the + at (b_1, ℓ_2), and we consider i + 1 to be a row. Thus we get that π(i) = j as needed for paths in (ii). In the special case of a_n > b_1 where i + 1 = ℓ_1, the L in column ℓ_1 in D will now have a + at (a_n, ℓ_1), since ℓ_1 > a_n and thus its b_B = a_n. There is also still a + at (b_1, ℓ_1), as there is a non-last + there in D̄. Now a_n plays the role of b_1 and we use the same arguments as before, even if i = a_n, in which case we consider i to be a row. In all cases, we get that π(i) = j, as needed for paths in (iii). Putting (i)-(iv) together, we are now done, since for all paths π̄(i + 1) = j we get π(i) = π̄(i + 1) = j. ■ With the proof that D gives the correct decorated permutation complete, we now put everything together. Theorems 3.8, 4.1 and 4.2 tell us that the construction given in Section 3 is the T-duality map (see Section 2.3) on the level of Le-diagrams. That is, we have given the T-duality map from π̄ → π on the level of Le-diagrams. DISCUSSION We gave an explicit construction for the T-duality map at the level of Le-diagrams. It is a particularly convenient construction for understanding the change in dimension under T-duality.
Specifically, we see that we get at least one + in every column of D, with the number of additional +'s being exactly the number of non-last +'s in D̄, which is exactly the expected relationship; see Section 4.1. There are a couple of extensions and other perspectives that one can consider, which we outline here. We can also see the inverse map from this perspective. The shape of D̄ is obtained from the shape of D by reversing Figure 2. For the filling, by the second point of Remark 3.3 the vertical part of each L-shape has height at least 2, and so any Le-diagram D with at least one + in every column can be decomposed uniquely into L-shapes, with potentially also one string of +'s on the left side of any section. Then, from the positions of the +'s in the vertical part of each L, the positions of the +'s in D̄ can be read off, reversing the original construction of the L's (Definition 3.2). We can also consider iterating the T-duality map. If D is co-loopless, that is, π has no fixed points, then the construction in Section 3 can be applied again, this time starting with D, to obtain a Le-diagram whose associated decorated permutation would be
1   · · · n − 1 n
a_2 · · · a_n   a_1.
That is, from the original π̄, the permutation is shifted to the left twice. We know that the shape of this new diagram corresponds to removing the column a_1 = π(1) from D and adding in a row labelled a_1 in the appropriate place. For the filling, from Theorem 3.7 we know that we really only need to look at what happens to the L-shapes. Applying Definition 3.2 to an L-shape, every + in the horizontal string of +'s now turns into its own L, just with no horizontal part (and the leftmost box of the vertical part could be + or 0). As a purely combinatorial map on decorated permutations, T-duality (Definition 2.7) also appears as a special case of a more general operation studied in [BCTJ22] (the A = ∅ case of Definition 23; note that their decorations for fixed points are used opposite to ours). For this map, looking at the left-shift π̄ → π direction, π̄ no longer needs to be co-loopless, and we can specify a set A of positions i to "freeze", i.e. for i ∈ A, π(i) = π̄(i). For our Le-diagram construction, co-loops i in π̄ can be dealt with as if they were originally loops; the only change is in the shape of D, as now i is a column where previously i was a row in D̄. For freezing fixed points i ≠ 1, n, which are then decorated as co-loops in π, the operation turns into simply adding to D a row i, in the correct position, of all 0's (and, if it was previously a column, removing column i). The more interesting operation would be freezing non-fixed points, or when positions 1 or n are frozen. Finally, going back to the motivation for studying the T-duality map: one of the key ingredients in [PSBW23] for proving the correspondence between tilings of the hypersimplex and of the amplituhedron came down to looking at T-duality as a map on plabic graphs. Formulated in the original π → π̄ direction, this map turned out to be a particularly nice construction on graphs, similar to taking a dual; see Definition 8.7. From our Le-diagram construction, via its bijection to plabic graphs (see §20 of [Pos06]), we can already see how the horizontal strings of +'s correspond to black lollipops (loops in π̄). As each + in the string maps to a trivalent white vertex, the entire horizontal string corresponds to a sequence of connected white vertices, each also connected to the boundary of the plabic graph. Thus these white vertices enclose boundary faces.
By Definition 8.7 in [PSBW23], a black vertex is placed in each of these faces and connected only to the boundary, giving the black lollipops as needed. It would be interesting to see whether the rest of the T-duality map on plabic graphs can be directly interpreted via our L-shapes. APPENDIX A. ALGORITHMIC ROW-BY-ROW REFORMULATION We can reformulate our characterization of the T-duality map at the level of Le-diagrams by writing an explicit algorithm that acts row by row. Using definitions and notation from before, the shape of D is still as described in Section 3.2. It remains to fill D with 0's and +'s. We will fill D row by row, from right to left, based on the corresponding row and the rows below it in D̄. For row a_n, since a_n was a column in D̄, we consider its "corresponding row" in D̄ to be a row of all 0's of the same length as in D and placed in the same position as in D (in between rows b_{j−1} and b_j, so that the order of labels of the SE border is maintained). Let L_u be the column containing the leftmost + in each row labelled b_u of D̄ (which is in box (b_u, L_u)) for 1 ≤ u ≤ k. We take the convention that L_n > n, as with this convention the algorithm does not need a special case for row a_n. Let W_u be the row directly below row b_u, and let W_n be the row directly below a_n, if they exist. Otherwise, for the last row b_k, define W_k = n + 1, or, if a_n is the last row, define W_n = n + 1. Concretely, we have W_u = n + 1 when u = k and j ≠ k + 1, or when u = n and j = k + 1. In particular, W_u − 1 is the column right before the next row below, or, for the last row, W_u − 1 = n. Finally, we say that a 0 is restricted if there is a + in a box to its left in the same row, and unrestricted otherwise. By convention, we take the leftmost + of row a_n to be at (a_n, L_n) with L_n > n. In Example 3.1, the only restricted 0 is in box (1, 2). There are two different row types to consider for D: (I) rows b_u such that the box (b_u, a_n) either does not exist or does not have a + in D̄; (II) rows b_u such that the box (b_u, a_n) has a + in D̄ (note: these rows will always be above row a_n). Algorithm for filling in rows of D - run in parallel for each row. Conditions for filling in Step 2: in Step 2, fill a box in row b_u at (b_u, ℓ) with a + if:
(i) In D̄, there is a + in box (b_u, ℓ).
(ii) In D̄, there is a + in some row below, say at (b_m, ℓ) where m > u, such that there are only unrestricted 0's in column ℓ in between rows b_u and b_m. Note: in the case that m = u + 1, this condition holds trivially.
(iii) In D̄, column ℓ has only unrestricted 0's below row b_u. In other words, in column ℓ, all the rows below b_u have their leftmost + before column ℓ, i.e. their last + has already passed, where by convention the leftmost + of row a_n occurs at (a_n, L_n) for L_n > n. Note: this is a special case of (ii).
An example of the conditions is given in Figure 8. Another way to phrase condition (ii) is as follows: look for +'s in column ℓ in rows below b_u where all rows in between have no +'s to the left of column ℓ (their leftmost + is before column ℓ). One interesting aspect of this algorithm is that for any particular row, only a part of D̄ is looked at, namely the row itself and the first rows below it for which the leftmost + has not yet passed. The proof that this algorithm agrees with the approach via gluing L's can be sketched as follows.
First, we check that for each individual L, the algorithm above builds the horizontal string of +'s using Step 1 or Step 2 condition (iii). Next, we check that the algorithm above builds the vertical part of each L using Step 3 and Step 2 conditions (i) and (ii). Counting in each case, we can directly show that the number of +'s in each vertical part of an L is s_ℓ + t_ℓ + 1. The argument for the strings of +'s is similar but simpler. Finally, to show the algorithm above glues the L's in each section as we described in Theorem 3.7, we can consider the possibilities for b_B. For full details see Section 5.2 of [Hu21].
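For readers who prefer pseudocode, the following is a minimal Python sketch of the Step 2 test for placing a + at box (b_u, ℓ). The data representation (a list of row labels in top-to-bottom order, plus a map from each row label to the set of column indices holding a +, with column indices increasing from left to right) and all function names are our own illustrative choices, not notation from the paper.

```python
# Minimal sketch of the Step 2 conditions (Appendix A), under the
# illustrative conventions stated above.

def restricted(plus, row, col):
    """A 0 at (row, col) is restricted if some + lies strictly to its left."""
    return any(c < col for c in plus.get(row, set()))

def unrestricted_zero(plus, row, col):
    """Box (row, col) holds a 0 with no + to its left in that row."""
    return col not in plus.get(row, set()) and not restricted(plus, row, col)

def step2_fills_plus(rows, plus, u, col):
    """Decide whether Step 2 places a + at (rows[u], col).

    (i)   D-bar already has a + in box (rows[u], col).
    (ii)  Some row below has a + in column `col`, with only
          unrestricted 0's in column `col` in-between.
    (iii) Column `col` holds only unrestricted 0's below rows[u]
          (the degenerate case of (ii)).
    """
    if col in plus.get(rows[u], set()):              # condition (i)
        return True
    for row in rows[u + 1:]:
        if col in plus.get(row, set()):              # condition (ii): reached a +
            return True                              # through unrestricted 0's only
        if not unrestricted_zero(plus, row, col):
            return False                             # a restricted 0 blocks the column
    return True                                      # condition (iii)

# Tiny example: rows b_1, b_2 with +'s as below; column 3 of row b_1
# is filled because row b_2 has a + there (condition (ii)).
rows = ["b1", "b2"]
plus = {"b1": {5}, "b2": {3, 6}}
print(step2_fills_plus(rows, plus, 0, 3))  # True
```

Under these conventions, condition (iii) is the case where the scan below row b_u runs out of rows without being blocked, matching the remark above that only the rows whose leftmost + has not yet passed need to be examined.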
The Effects of Magnesium Sulfate with Lidocaine for Infraclavicular Brachial Plexus Block for Upper Extremity Surgeries Background  An addition of analgesic to anesthetic agents is likely to increase the effects of anesthesia and reduce associated adverse outcomes. Several adjuvants have been studied in this regard. The aim of this study is to investigate the effects of adding a magnesium adjunct to lidocaine for the induction of infraclavicular block. Methods  Patients referred to Shohada Ashayer Hospital, Khorramabad, for wrist and hand surgery were enrolled in this study. The intervention/case group included patients who received 18 mL lidocaine (2%) + 2 mL magnesium sulfate (50%) + 10 mL normal saline; the control group received 18 mL lidocaine (2%) + 12 mL normal saline. After the induction of ultrasound-guided infraclavicular block, parameters such as the time to reach complete sensory and motor block, the duration of the blocks, hemodynamic parameters (hypotension and bradycardia), and postoperative pain, using visual analogue scale criteria, were measured. The obtained data were analyzed using a Bayesian path analysis model. Results  A total of 30 patients were included in each group. In the case group, sensory and motor block lasted 12.136 ± 4.96 and 13 ± 3.589 minutes longer, respectively, than in the control group. The onset of sensory block and of motor block took 2.57 ± 0.764 and 4.66 ± 0.909 minutes longer, respectively, in the case group. Regarding the hemodynamic parameters, blood pressure was 0.217 ± 5.031 and 1.59 ± 5.14 units lower in the case group immediately following the block and the surgery, respectively. Similarly, heart rate was 0.776 ± 4.548 and 0.39 ± 3.987 units higher in the case group at 30 minutes and 2 hours after the procedure, respectively. A decrease in pain was seen at 8, 10, and 12 hours after the surgery, as compared with the control group. The addition of magnesium to lidocaine for infraclavicular block resulted in significantly longer sensory and motor block and decreased postoperative pain at 12 hours. Conclusion  Heart rate and blood pressure did not decrease significantly in the case group. It can be concluded that the addition of magnesium sulfate to lidocaine can produce better anesthetic and analgesic outcomes with low-to-no adverse effects. Introduction Orthopedic procedures of the hand, wrist, and forearm, despite being minor surgical procedures, are associated with considerable postoperative pain. Infraclavicular brachial plexus block is useful in producing prolonged and effective postoperative analgesia in these patients. 1 Ultrasound-guided infraclavicular brachial plexus nerve block has been practiced recently, with transverse, posterior costoclavicular, medial, distal, and proximal approaches commonly used. 2 The distal approach is conventionally used; however, the proximal method is likely to be associated with a reduced use of anesthetic agents. 3 Nonetheless, pneumothorax and neuraxial spread are common complications associated with the ultrasound-guided approach. 4,5 Lidocaine is a rapid-acting local anesthetic agent that is used for the blockade of motor and sensory fibers for up to 1.5 hours. 6 Several adjuncts have been investigated to enhance the analgesic response of lidocaine for infraclavicular block. 7 The effect of magnesium was first recognized for the treatment of arrhythmia and preeclampsia, and its effect on anesthesia and analgesia has recently been recognized. 8,9
Magnesium sulfate has also been used as an adjunct to anesthesia in recent years. It is also an effective analgesic agent for perioperative pain. 10,11 Studies have also reported that the intraoperative use of magnesium is characterized by a reduced use of anesthetics and muscle relaxants. 12 Furthermore, opioid use, postoperative nausea and vomiting, hypertension, and shivering have shown a decreasing trend with the use of magnesium sulfate. 13,14 This study was designed to evaluate the effects of the addition of magnesium sulfate to lidocaine for infraclavicular brachial plexus block on pain control during and after hand, wrist, and forearm surgery in patients referred to Shohada Ashayer Hospital in 2018. Methods The aim of this study was to evaluate the effects of magnesium sulfate supplementation with lidocaine for infraclavicular nerve block for postoperative pain management following hand, wrist, and forearm surgery in patients referred to Shohada Ashayer Hospital, Khorramabad, between February 2018 and 2019. Patients undergoing the procedure were selected by a simple sampling method and were randomly divided into two groups: group A included patients receiving magnesium sulfate with 2% lidocaine, and group B included patients administered saline with 2% lidocaine. Patients aged 18 to 85 years, of ASA class I-II, who consented to participate in the study were included. Patients with contraindications to brachial nerve block (allergy to local anesthesia, local infection at the injection site, and coagulopathy), traumatic nerve injury of the upper limb, history of opioid abuse, alcohol and drug abuse, recent chronic analgesic treatment, celiac disease and meningitis, allergy to lidocaine, peripheral neuropathy, neuromuscular disease, pregnancy and lactation, specific psychotic disorders, cognitive impairments, and those who declined to participate were excluded from the study. After detailed explanations of the study were provided, written consent was received from each patient. The patients were allotted a unique code which was known only to the nurse in charge of the anesthesiology unit. The infraclavicular brachial plexus block was performed by the anesthesiologist. Noninvasive monitoring (blood pressure, heart rate) was performed and Ringer's solution was infused. All patients received premedication with 0.5 mg/kg midazolam and 2.2 mg/kg fentanyl prior to the block. Under ultrasound guidance (Ezono 3000, Germany), using a linear ultrasound probe, the infraclavicular brachial plexus was identified, and the in-plane method with a SonoPlex needle (22G) was used to inject the following as per the group allocations: case group: 18 mL lidocaine (2%) + 2 mL magnesium sulfate (50%) + 10 mL normal saline; control group: 18 mL lidocaine (2%) + 12 mL of normal saline. For the block, the patient was placed in a supine position with the arm abducted to reduce the depth between the plexus and the skin. The cords of the brachial plexus are seen as hyperechoic circles surrounding the axillary artery. The needle was inserted using the in-plane method, from the inferior aspect of the ultrasound transducer, 1 cm into the skin. After reaching the artery, the anesthetic agents were injected. Following the block, the patients were evaluated for hemodynamic changes and block complications, such as pneumothorax, hypotension, bradycardia, and hematoma. The decline in sensory and motor activity was assessed every 2 minutes following the block until complete sensory block was achieved.
Furthermore, during the surgery, sensory and motor assessment was performed every 5 minutes during the first 30 minutes of the surgical procedure. Subsequently, motor and sensory activity was monitored every 15 minutes until the end of the surgery. If anesthesia failure was seen 30 minutes after the block, it was marked as infraclavicular block failure. The magnitude of the motor block was measured with the Bromage score 16 : 1 = complete limb movement, 2 = partial movement, 3 = relative movement, and 4 = complete immobility. Sensory block was measured by a pinprick test: 0 = no sensation, 1 = sensory loss, and 2 = no sensory change. The duration of the sensory block is the period between the end of local anesthetic administration and the return of normal sensation. The duration of the motor block is the period between the end of local anesthetic administration and complete motor function reversal. The patient's blood pressure and heart rate were recorded before the block, 30 minutes after the end of the injection, at the end of surgery, and 2 hours after the surgery. If blood pressure dropped more than 20% below the baseline blood pressure, 5 mg of ephedrine was injected. When the heart rate decreased below 50/minute, 0.5 mg of atropine was injected. Postoperative pain was measured at 2, 4, 6, 8, 10, and 12 hours using the visual analogue scale, where 0 indicated no pain and 10 indicated the worst pain imaginable. The pain was evaluated based on the type of surgery performed, and patients were educated accordingly regarding the perception of the pain. Side effects such as nausea, vomiting, bradycardia, hypotension, and itching at 4, 8, and 12 hours after surgery were also evaluated. The duration of the sensory block was the primary outcome, whereas the duration of the motor block, onset of sensory and motor block, total opioid use, and postoperative pain score were secondary outcomes of the study. The obtained data were recorded in the evaluation form and were assessed using SPSS V. 21. To investigate the effect of the intervention (group A: lidocaine with magnesium sulfate; group B: saline with lidocaine) on the dependent variables in the nerve block subgroup (duration of anesthesia, onset of anesthesia time, and onset of immobility time), biological factors (blood pressure and heart rate), and pain at different times, a Bayesian path analysis model was used that could help us determine the significant correlations between the dependent variables in each subgroup. Since the correlation between the dependent variables is significant, concurrent statistical inferences were required to determine the effects of the intervention on the dependent factors, for which path analysis was used. In Bayesian inference, credible intervals, instead of p-values, are used to examine the significance of the effect of the intervention variable on the dependent variables. If the credible interval contains zero, the intervention is considered to have no significant effect. Results The results of the study regarding the effects of the intervention on dependent variables in the nerve block subgroup using Bayesian path analysis are shown below.
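As a concrete illustration of the decision rule just described, the following is a minimal Python sketch, not part of the original analysis, of how a 95% credible interval computed from posterior samples determines significance; the posterior draws shown are simulated for demonstration only.

```python
import numpy as np

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from posterior samples."""
    tail = (1 - level) / 2
    return np.quantile(samples, [tail, 1 - tail])

def significant(samples, level=0.95):
    """An effect is judged significant if its credible interval excludes zero."""
    low, high = credible_interval(samples, level)
    return not (low <= 0.0 <= high)

# Simulated posterior draws for the between-group difference in
# sensory block duration (minutes); values are illustrative only.
rng = np.random.default_rng(seed=1)
posterior = rng.normal(loc=12.13, scale=4.96, size=10_000)
low, high = credible_interval(posterior)
print(f"95% credible interval: ({low:.2f}, {high:.2f})")
print("significant:", significant(posterior))
```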
Determination and Comparison of Sensory Block Duration in the Two Study Groups According to the obtained credible interval, it can be concluded that the intervention had a significant effect on the duration of anesthesia, as the duration of anesthesia for the target group patients was approximately 12.13 ± 4.96 minutes longer than in the control group. In addition, variables such as age, gender, and body mass index (BMI) had significant effects on the duration of anesthesia (►Table 1). Determination and Comparison of Motor Block Duration in the Two Study Groups According to the obtained credible interval, it can be deduced that the intervention had a significant effect on the duration of immobility, as the duration of immobility for the target group patients was approximately 13.14 ± 3.589 minutes longer than in the control group. In addition, variables such as age, gender, and BMI had significant effects on the duration of immobility (►Table 2). Determination and Comparison of Time to Anesthesia in the Two Study Groups The mean time to anesthesia for the case group was approximately 2.57 ± 0.764 minutes longer than in the control group. Considering the credible interval, it can be concluded that the intervention had a significant effect on the time to anesthesia. In addition, age and BMI had a significant effect as well. However, no such difference was reported in terms of gender. Determination and Comparison of the Onset Time of Immobility in the Two Study Groups The median time to onset of immobilization in the case group was approximately 4.66 ± 0.909 minutes longer than in the control group. Furthermore, variables such as age, gender, and BMI also had a significant effect on the onset time of immobility (►Table 3 and ►Fig. 1). Determination and Comparison of Mean Blood Pressure before Nerve Block in the Two Study Groups The mean blood pressure before the nerve block in the case group was estimated to be 3.1 ± 4.941 units higher than in the control group. However, the difference did not show any statistical significance. Additionally, there was no significant difference in the mean blood pressure in terms of gender; however, the effects of age and BMI on blood pressure were significant. Determination and Comparison of Mean Blood Pressure after Nerve Block in the Two Study Groups The mean blood pressure after the nerve block for the patients in the case group was estimated to be approximately 0.21 ± 5.031 units lower than in the control group, which was not statistically significant. In addition, there was no significant difference in mean blood pressure among gender groups. Nonetheless, the effect of age and BMI on blood pressure was significant. Determination and Comparison of Mean Blood Pressure after Surgery in the Two Study Groups The mean blood pressure after surgery for the case group was estimated to be approximately 1.59 ± 5.14 units lower than in the control group. Statistically, there were no significant differences in this variable. But the effect of age, gender, and BMI on blood pressure was significant. Determination and Comparison of Mean Heart Rate before Nerve Block in the Two Study Groups The mean heart rate before the nerve block for the patients in the case group was estimated to be approximately 3.58 ± 4.44 units lower than in the control group. The association was not found to be statistically significant. Also, gender had no significant effect on the heart rate. But the effect of age and BMI on the heart rate was significant.
Determination and Comparison of Mean Heart Rate after Surgery in the Two Study Groups The mean heart rate after the surgery for the patients in the case group was approximately 1.02 ± 3.98 units lower than in the control group. Considering the obtained credible interval, it can be deduced that the mean heart rate after the surgery did not differ significantly between the case and control groups. Also, gender had no significant effect on the mean heart rate after surgery. But the effect of age and BMI on heart rate was significant. Determination and Comparison of Mean Heart Rate 30 Minutes after Surgery in the Two Study Groups The mean heart rate 30 minutes after surgery for the patients in the case group was approximately 0.776 ± 4.58 units higher than in the control group. According to the obtained credible interval, it can be deduced that this difference in heart rate between the case and control groups was not statistically significant. But the effect of age, gender, and BMI on heart rate was significant. Determination and Comparison of Mean Heart Rate 2 Hours after Surgery in the Two Study Groups The mean heart rate 2 hours after surgery was approximately 0.39 ± 3.98 units higher in the case group than in the control group. According to the obtained credible interval, it can be deduced that this difference in heart rate between the two groups is not statistically significant. But the effect of age, gender, and BMI on the heart rate 2 hours after the surgery was significant (►Fig. 2). Determination and Comparison of Pain 2 Hours after Surgery in the Two Study Groups The mean pain score 2 hours after the surgery in the case group was approximately 0.598 ± 0.507 units higher than in the control group. Based on the obtained credible interval, it can be deduced that this difference is not statistically significant. Also, age, gender, and BMI had no significant effect. Determination and Comparison of Pain 4 Hours after Surgery in the Two Study Groups The mean pain score 4 hours after the surgery was approximately 0.69 ± 0.64 units higher in the case group than in the control group. Based on the obtained credible interval, it can be deduced that this difference is not statistically significant. Also, gender had no significant effect on pain at 4 hours after surgery. But the effect of age and BMI on pain was significant. Determination and Comparison of Pain 6 Hours after Surgery in the Two Study Groups The mean pain score 6 hours after surgery was approximately 0.12 ± 0.765 units higher in the case group than in the control group. Based on the obtained credible interval, it can be deduced that this difference is not statistically significant. Also, gender had no significant effect on pain at 6 hours after surgery. But the effect of age and BMI on pain was significant at this time. Determination and Comparison of Pain Rate 8 Hours after Surgery in the Two Study Groups The mean pain score 8 hours after surgery was approximately 0.64 ± 0.703 units lower in the case group than in the control group. Based on the obtained credible interval, it can be deduced that this difference is not statistically significant. Also, gender had no significant effect on pain at 8 hours after surgery. But the effect of age and BMI on pain was significant at this time. Determination and Comparison of Pain Rate 10 Hours after Surgery in the Two Study Groups The mean pain score 10 hours after surgery was approximately 0.61 ± 0.46 units lower in the case group than in the control group.
Based on the obtained credible interval, it can be deduced that this difference is not statistically significant. Also, age had no significant effect on pain at 10 hours after surgery. But the effect of gender and BMI on pain was significant at this time. Determination and Comparison of Pain Rate 12 Hours after Surgery in the Two Study Groups The mean pain score 12 hours after surgery in the case group was approximately 0.88 ± 0.382 units lower than in the control group. Therefore, given the credible interval, it can be deduced that this difference is statistically significant. But age, gender, and BMI had no significant effect on pain at 12 hours after surgery (►Fig. 3). Discussion Studies have shown that the addition of adjunct magnesium to lidocaine for intravenous regional anesthesia is associated with early onset of sensory block, increased duration of the anesthesia, and low to no side effects. 15,16 In a controlled randomized trial of 30 patients administered magnesium along with lidocaine for axillary nerve block using the transarterial method, Haghighi et al 17 found the duration of motor and sensory block to be significantly prolonged 18,19 compared with the lidocaine-only group. 17 Furthermore, the addition of a magnesium adjunct to lidocaine for Bier's block is also reported to decrease chronic limb pain and the number of failed treatments. 20 Our study also reports that at 12 hours following the surgery, the intervention group (magnesium + lidocaine) had a reduced incidence of postoperative pain. Shoeibi et al 21 presented in their study that the use of 10% magnesium sulfate is associated with a significant increase in the duration of spinal anesthesia in patients undergoing cesarean section surgery. 22 In a comparative study, Mirkheshti et al 23 reported that the addition of magnesium to lidocaine for upper extremity surgeries is associated with a longer onset of sensory and motor block and increased block duration as compared with the use of paracetamol with lidocaine. In a study by Turan et al, 24 in patients undergoing hand surgery under regional anesthesia, the addition of 15% magnesium sulfate to lidocaine significantly decreased postoperative pain at 15, 20, 30, 40, and 50 minutes, along with a reduced need for diclofenac. Nonetheless, in our study, the addition of 50% magnesium sulfate to lidocaine led to a decrease in pain at 8, 10, and 12 hours after surgery. However, this outcome was statistically significant only at 12 hours after surgery. 25,26 Differences in the methods of statistical analysis can be one of the possible causes of the variations in the outcomes. Our study also revealed that magnesium with lidocaine is not associated with significant destabilizing changes in hemodynamic parameters such as blood pressure and heart rate. Similar outcomes have been reported by some recent studies conducted on axillary brachial plexus block 27 and laparotomy surgery. 28
Effect of energy drink consumption on blood glucose level and clotting time; A comparative study on healthy male and female subjects Background: Energy drink (ED) consumption has rapidly increased among youngsters in recent years. EDs have a high concentration of sugar, caffeine, taurine, and other stimulants that enhance mental and physical activity. The aim of the current study was to evaluate the comparative effects of taurine-based energy drinks on blood glucose (BG) level and prothrombin time (PT) among healthy male and female subjects. Methodology: A cross-sectional single-centre observational study was conducted over a sample of 50 subjects between 18 and 25 years of age. The subjects were kept in two distinct groups as males and females, and all assessments were made individually for both genders. Written informed consent was taken from each subject prior to enrolment in the study. As per the study protocol, the BG level (mg/dl) and PT (sec) were taken twice. Each subject was asked to drink 250 ml of a taurine-based carbonated energy drink after the first recording of both (BG level and PT), and secondary recordings were taken 1 hour after consumption of the ED. The collected data were statistically analyzed using SPSS version 16. Results: The mean age of the study subjects was 22.34±2.3 years. It was seen that the post-test mean BG level significantly increased to 129.2±18.3 mg/dl (males) and 147.92±24.4 mg/dl (females). Moreover, PT was decreased both in males (96±45.8 sec) and females (78.6±20.2 sec) after ED consumption. Conclusion: ED consumption contributed to an increased BG level and increased coagulation (decreased PT), indicating an increased risk of thrombosis and type 2 diabetes among persistent consumers. Further large-scale studies are required locally in order to provide sufficient evidence. Introduction Energy drink (ED) consumption has rapidly gained popularity during the last few years, and EDs are now being used excessively throughout the world, especially by the young population [1-3]. A single can of ED is known to contain 1000 mg of taurine, 80 mg of caffeine, glucuronolactone, glucose, herbal supplements, sweeteners, vitamins (B group), etc 4 . Additionally, peanuts, guarana, yerba mate, etc., are also added to some of the EDs, which can add up to 300 mg more caffeine content to these beverages 5,6 . Regardless of the acceptable daily limit, some EDs providing instant energy contain taurine and caffeine at 10 times the average daily limit to be ingested. These products are being manufactured and marketed on a large scale; although there are numerous beneficial effects associated with the consumption of EDs, like improvement in athletic performance, decreased weight, and increased stamina, the increasing health risks due to the ED components cannot be neglected 7 . EDs are mostly used by adolescents and young adults to prevent drowsiness, for better performance, additional energy, alertness, etc 2 . A study also reported exhaustion as one of the leading causes behind excessive ED consumption 2 . A number of health issues are associated with ED consumption; increased pulse rate, insomnia, gastrointestinal disturbances, increased blood pressure (BP), iron deficiency anemia, osteoporosis, and cardiovascular diseases are among the most prominent complaints [8-10] . Furthermore, ED consumption also promotes weight gain and dental problems 11 .
Obesity and type 2 diabetes are two well-known epidemic health conditions, and a principal cause behind the promotion of these health hazards is the consumption of caffeine- and taurine-enriched beverages worldwide 12 . In addition to the effects on blood glucose, both caffeine and taurine have significant effects on blood coagulation as well. They enhance platelet activity, which in turn increases cardiovascular risks including arrhythmias, myocardial infarction, and sudden cardiac death 13,14 . Although the direct mechanism and association are unknown, based on the rapid case filings of sufferers with cardiac problems associated with ED consumption, the Food and Drug Administration (FDA) has been strictly investigating the safety of these beverages [15-17] . Vast literature is in favor of the temporary benefits of ED consumption, whereas the negative aspects are yet to be explored; prolonged use might cause harmful effects to both the physiological and mental health of the consumer 18 . Currently, the strong marketing and availability of these harmful energy drinks have provided open access for the young population, and due to lack of knowledge they are being purchased and consumed extensively. Therefore, the current study was conducted with the aim of exploring the effects of EDs on the blood glucose level and prothrombin time of healthy individuals. Methodology This cross-sectional study was performed over a sample of 50 subjects between 18 and 25 years of age, while subjects with any diagnosed neurological or physiological disorder were excluded. All ethical protocols were followed, the study was initiated after receiving informed consent from each subject, and data confidentiality was maintained. The subjects were divided into two groups based on gender, as males and females, and all assessments were made individually for both genders. Prior to the experimentation, a study questionnaire for details regarding caffeine and sugar consumption over the previous 12 hours was given to each subject. The blood sugar level was measured with a glucometer once before and then once after ED consumption. Moreover, the clotting time was also observed, and the assessment was carried out by the drop method. Each subject was asked to drink 250 ml of a taurine-based carbonated energy drink after the first recording of both (blood glucose level and clotting time), and secondary recordings were taken 1 hour after consumption of the ED. Data were statistically analyzed using SPSS version 16; mean and standard deviation were used for data interpretation. Results Out of 50 enrolled subjects, 25 belonged to each gender, with a mean age of 22.34±2.3 years. The BG level (mg/dl) and PT (sec) were assessed both before and 1 hour after ED consumption. It was seen that the BG level was relatively high among females both before and after ED consumption as compared to males, i.e., a mean increase of 30.6 mg/dl (males) and 44 mg/dl (females) in BG level after ED consumption was observed. The effect of ED on PT was also prominent; the clotting time significantly decreased among both males (-21 sec) and females (-24.6 sec). Discussion Our findings indicated that ED consumption has a significant impact on BG level and coagulation (PT). Based on recent statistics from a market research report, EDs have gained much popularity recently, which has greatly increased the risk ratio, especially among youth.
There are a number of physiological and mental variations associated with excessive ED consumption; high intake leads to altered release of renin, catecholamines, and dopamine, which stimulates the central nervous system (CNS), increasing the blood pressure (BP) and heart rate (HR) 8 . Therefore, this study was conducted with the aim of evaluating the influence of ED consumption on the healthy human body. According to a World Health Organization (WHO) report, increased taxation on EDs would ultimately decrease consumption and reduce the health risks 19 . In support, the Fiscal Policies for Diet and Prevention of Noncommunicable Diseases (NCDs) report suggested that an overall tax increase of 20% should be made on all sugary drinks to reduce consumption of these products 20 . From the nutritional standpoint, free sugars have no significance in the diet, and even a single serving of 250 ml of a sugary drink per day can be harmful 19 . It is reported that Turkey has recently restricted ED consumption, and underage utilization is strictly controlled 21 . Increased coagulation (decreased PT) was observed in response to ED consumption, with a mean decrease in PT in both genders (Table 1). EDs affect coagulation by directly altering platelet activity: they enhance platelet aggregation via arachidonic acid, and the transformations are visible 1 hour after consumption of an ED 23,24 . This is supported by a similar study conducted in Australia, where a significant decrease in coagulation time was observed following ED consumption, i.e., 13.7±3.7% aggregation before having the ED compared with 0.3±0.8% aggregation after ED consumption (p<0.01) 7 . Large clinical studies are recommended to further investigate the impact of these EDs on the healthy human body, to strengthen policies for the manufacture and marketing of these harmful beverages, and to propose programs for the management of excessive consumption. The major limitations of the study were the small sample size, restricted resources, and lack of voluntary participation. Such studies should be initiated locally, specifying the frequency of consumption, factors promoting the use, and media and social impacts endorsing the consumption of EDs. Programs and campaigns must be initiated in order to provide knowledge and awareness regarding the negative impacts of these drinks for both consumers and manufacturers. Conclusion It can be concluded from the study results that excessive ED consumption might increase blood glucose concentration and also affect the clotting process. No significant dissimilarities were observed between the two genders; the variations after ED consumption followed a similar trend for both males and females. Hence, excessive ED consumption can cause serious health issues through cardiovascular and metabolic changes, increasing the risk of long-term disease conditions. It is recommended that large-scale studies be conducted to explore ED consumption, its impact, and the side effects caused by excessive ED consumption. Conflicts of Interest None. Acknowledgement I would like to acknowledge the study subjects for their active participation. Funding None.
Female zebrafish (Danio rerio) demonstrate stronger preference for established shoals over newly-formed shoals in the three-tank open-swim preference test Zebrafish (Danio rerio) share a considerable amount of biological similarity with mammals, including identical or homologous gene expression pathways, neurotransmitters, hormones, and cellular receptors. Zebrafish also display complex social behaviors like shoaling and schooling, making them an attractive model for investigating normal social behavior as well as exploring impaired social function conditions such as autism spectrum disorders. Newly-formed and established shoals exhibit distinct behavior patterns and inter-member interactions that can convey the group's social stability. We used a three-chamber open-swim preference test to determine whether individual zebrafish show a preference for an established shoal over a newly-formed shoal. Results indicated that both sexes maintained greater proximity to arena zones nearest to the established shoal stimulus. In addition, we report the novel application of Shannon entropy to discover sex differences in the systematicity of responses not revealed by unit-based measurements; male subjects spent more time investigating between the two shoals than female subjects. This novel technique using established versus newly-formed shoals can be used in future studies testing transgenics and pharmacological treatments that mimic autism spectrum disorder and other disorders that affect social interaction. Introduction Zebrafish share many relevant genes with mammals, promoting the species as a useful model for neuroscience, biomedical, and human behavioral disorder research [1,2]. Zebrafish have been used as an animal model for disorders of the nervous system including anxiety [1,3], addiction [4], epilepsy [5], and several neurodegenerative diseases [6,7]. Zebrafish also demonstrate sociability and many aspects of grouping behaviors [8-10] and are therefore a viable model for investigating social behavior as well. Zebrafish display a strong preference towards joining a shoal with live conspecifics versus remaining socially isolated [11-14]. Shoaling also provides group predatory defense mechanisms: under the many eyes hypothesis, the predation risk can be minimized by the benefits of aggregation, as a large group is better equipped than an isolated animal to detect predators [15,16]. Heightened shoaling behavior could also be an indicator of positive affect, as it can be spontaneous and involves fewer antagonistic interactions [17]. However, group living can also present adverse circumstances, as parasites are more easily spread in large groups, and large groups can be more prone to targeting by predators [18]. In addition, intra-group competition for food and resource limitation grows as group size increases [19]. It is necessary to determine affinity for specific shoal characteristics to completely understand the nuances of this complex behavior and apply it to human disease models. While many studies have addressed zebrafish preference based on visual characteristics such as fish size [20], shoal size, male-to-female ratio, stripe pattern, etc. [21], fewer data exist regarding the role of intragroup familiarity in zebrafish social preference. Our results indicate that a single test subject can differentiate between an established shoal and a newly-formed shoal, and both male and female subjects prefer to spend more time in proximity with the established shoal.
Female test fish explore more within the vertical column nearest the established shoal, while male test fish make more cross-tank transitions. Zebrafish exhibit preferences in social choice Previous studies indicate characteristics driving fish shoal preference, such as shoal size [22], shape [23], and parasite load [24]. Visual cues specific to individual fish within shoals are also an important determinant of shoaling behavior. Zebrafish are able to identify phenotypic differences in stripe pattern and exhibit shoaling preference dependent on early exposure to specific patterns that do not depend on their own phenotype [25,26]. European minnows prefer to interact with shoals that are known to them [27], even if the familiar shoal is the smaller shoal; however, no research has presented subjects with a choice between novel shoals that have different levels of inter-member familiarity. Social preferences based on visual cues suggest that shoal features may also influence the preference of a lone fish [28]. Sex differences for shoaling preference. Certain aspects of shoaling behavior differ between males and females [29]. Interestingly, male zebrafish display bolder responses than females in both the open-field test and the novel object test [30], and male zebrafish are more exploratory of novel environments than females [31]. Shoal sex and size represent two qualities that may influence the affinity a subject has for a particular shoal. Previous studies have supported the presence of sex differences in zebrafish involving the choice between joining a shoal or remaining segregated [32]. In one study, a single subject chose to spend more time in proximity with a single same-sex fish than with a conspecific of the opposite sex [13,33]. However, another study that observed subject partiality between shoals demonstrated that males preferred to shoal with females, though females demonstrated no clear preference to shoal with one sex over the other [13]. Though male zebrafish have demonstrated a lack of significant preference for a particular shoal size, females display a clear preference for larger shoals compared to smaller ones [12,13]. Because females may seek shoaling for protection purposes, prioritizing shoal size over shoal sex is anticipated in female shoaling behavior; larger shoals provide greater protection from predators through enhancing the confusion effect [22,32] and increasing the dilution effect [33,34], thus affording greater protection for each member of the shoal [12]. Previous research indicates that predation risk is a likely reason for female sexual segregation [31]. The dynamics and safety of shoal stability may also impact a female's shoal preference when shoal size is modulated [12]. However, both male and female zebrafish prefer to affiliate with shoals rather than remain socially isolated, emphasizing innate social tendencies commonly observed among the species [12,13]. Because both females and males show preferences for different shoal characteristics such as size and collective group shape, additional factors likely influence zebrafish shoaling behavior; other criteria may include parasitism [35], distance from predators [36], and the presence of poor competitors [37]. Novel vs. established shoals. A newly formed shoal generates immediate competition between members, resulting in the formation of dominant and subordinate roles within the shoal [38].
A previous study examined the activity of both dominant and subordinate male zebrafish immediately after grouping and again after five days of acclimation to conspecifics. Behavioral observations on the fifth day suggest that the formation of a stable social hierarchy occurs within the first five days following initial group formation [39]. These visible alterations in behavioral tendencies after shoal formation suggest that an established shoal may appear as a less threatening environment, while a newly-formed shoal may present as a more harmful environment due to the lack of established hierarchical roles. In addition to known sexually dimorphic behavioral characteristics in zebrafish [40,41], dominance and aggression behavior patterns within shoals might differ between males and females. For instance, dominant males are more aggressive with their submissive conspecifics than dominant females [42], and males demonstrate stronger lateralization during aggressive responding than females [43]. Behavioral markers of previously established shoals have been observed and characterized. The Trinidadian guppy (Poecilia reticulata) and fathead minnow (Pimephales promelas) exhibit shoaling behaviors similar to those displayed by zebrafish and are also used as models for sociality and social behavior [44,45]. Findings from investigations on Trinidadian guppy shoal fusion indicate a gradual decrease in the mean difference in shoal member size after two shoals were introduced. Nearly all previously established shoals reformed new groups due to fish size preference [46]; the shoal fusion that took place illustrates the capacity of fish to choose a shoal based on member characteristics. Previous explorations of fathead minnows have revealed variations in behavioral responses under conditions that are indicative of a predatory threat [47]. Through observing behaviors of familiarity, or the lack thereof, one can determine whether a newly-formed or established shoal is more stable when presented with a potential predator. Established shoals with intragroup familiarity demonstrated tighter shoaling behavior, less freezing, and more dashing behavior. Additionally, these members performed more inspection visits compared to subjects in the newly-formed shoal [47]. These findings support the notion that a previously established shoal provides heightened security during potential threats of predation. Based on the increased likelihood of survival, it is possible that zebrafish and minnows may choose to join a shoal that presents as established. In addition, previous research suggests that intragroup familiarity is accompanied by more efficient communication between members, providing a less dangerous social living space [48,49]. Shoal cohesion is characterized by interactions between conspecifics in response to their changing environment [50]; therefore, observing inter-member communications may help determine the capacity for different shoals to attract isolated fish. Recently, investigators examined the preference of both male and female wildtype zebrafish upon providing subjects with the choice between a familiar fish and a novel fish. Both male and female subjects exhibited a preference for a novel conspecific rather than a familiar one [11]. The implications from this study strengthen the conjecture that zebrafish possess social memory. It is therefore likely that swimming pattern and tank localization can serve as dependent variables to identify affinity toward shoals of varying levels of establishment.
We suggest that, in conjunction with the ability to differentiate between shoals, zebrafish are capable of detecting the degree of polarization of a particular shoal by observing the swimming patterns of the shoal members. Previous investigations have observed longitudinal patterns of shoaling and schooling in zebrafish to identify characteristic differences between the two behaviors [51]. Polarization represents the tendency of a group to swim in the same direction. Based on differential characteristic swimming patterns, a shoal is seen as a low-polarized group, while a school is considered a high-polarized group [16,52]. When the relative location and average movement velocity of subjects were analyzed over a five-day period, polarization decreased after this interval, possibly due to a higher level of perceived comfort or safety [53]. These findings raise the question of whether acclimatizing to an established shoal appears more attractive to a single test fish than acclimatizing to a newly formed, and possibly more polarized, shoal. Namely, will a new zebrafish "fit in" better with an established shoal because it is seeking comfort and safety? Prior research has not only considered the innate social tendency of zebrafish to shoal [29] but has also investigated numerous aspects of zebrafish behavior within a shoal [47,52,54]. Nonetheless, few studies have examined the role of cohesion and shoal stability in determining preferences between shoals. Further, though some studies support the conjecture that zebrafish exhibit evidence of social memory [11,55], it is still unknown whether a lone fish can perceive the visual differences of a newly-formed versus an established shoal, and how those differences might affect social choice. Prior research in our lab demonstrated the experimental efficacy of a novel open-swim paradigm for studying zebrafish social preference [56]. We used the open-swim task to test zebrafish preference for an established shoal over a newly-formed shoal. The test fish was placed in the center compartment of a three-chamber tank system while an established and a newly-formed shoal were each displayed in the two flanking tanks. Given the reduced aggression and increased cohesion found in established shoals, both male and female zebrafish subjects demonstrated proximal preference for an established shoal over a newly-formed shoal. Furthermore, notable sex differences were demonstrated, with female zebrafish showing stronger preference for established shoals over newly-formed shoals while also displaying lower behavioral entropy than male zebrafish. The results from the current study expand the present knowledge on shoaling preference and can be used in future studies of social preference in wild-type as well as transgenic lines of zebrafish. Subjects The experimental subjects (N = 82) were healthy mature male (n = 45) and female (n = 37) wild-type zebrafish (EKK strain) of approximately 6-12 months of age and 2.5-6.4 cm in length. Experimental subjects as well as the fish used as stimuli were obtained from Aquatica Tropicals, Inc. (Ruskin, Florida, USA). Velkey et al [56] used a total of 78 subjects (males and females, with no tests for sex differences) in conditions testing a live-shoal stimulus (vs. either a video stimulus or a mobile-model stimulus) in their study, which revealed significant preferences for live-shoal stimuli. As such, a similar sample size was used for the current study.
All care and treatment of subjects in the present study were consistent with the recommendations in the Guide for the Care and Use of Laboratory Animals [57]. The research was conducted under an existing protocol (#2019-8) reviewed and approved by the Christopher Newport University Institutional Animal Care & Use Committee. Materials, apparatus, and procedure Subjects were sexed and housed in eight separate holding tanks (20.3 x 30.5 x 50.8 cm), each holding 37.8 liters (10 gal) of conditioned water maintained at a temperature of 28.5°C. Visual barriers were placed between holding tanks to avoid further familiarity. Four groups of males and four groups of females were housed in the eight holding tanks so that each tank consisted of an equal number of fish. The maximum fish density in the holding tanks was approximately 1.5 fish per gallon. The water had constant filtration and aeration systems, and all fish were housed under a 14-hour light/10-hour dark cycle. Fish were fed daily using the Aquaneering Scientific Hatcheries Diet for Danio rerio. Fifty percent water changes were performed weekly for all housing and experimental tanks. The present study used newly-formed and established shoals as stimuli. Novel shoals were grouped in the morning and used the same day of experimentation as a stimulus; to ensure the demonstration of behaviors that characterize a shoal as novel, each shoal member was randomly selected from one of the four separate holding tanks. Shoals were held for seven days in groups of four fish in order to establish intra-shoal familiarity [58], and these shoals were subsequently used as the established shoal stimulus. The objective of the present study was to determine the behavioral responses of test subjects when they were presented with different shoaling stimuli. Experimental subjects were selected for each trial and performed an open-tank, free-swim task [11,56]. The testing tank (20.3 x 25.4 x 40.6 cm) was one 20.8-liter (5.5 gal) tank positioned between two stimulus tanks of the same size (Fig 1). Rosco brand Linear Polarizing Filter Sheets (#7300) were obtained from B & H Photo and Video (New York, NY) and were placed on the outer surface of the center tank between the side tanks. One of the filters had the grid oriented horizontally and the other grid was vertically oriented. With the filters in place, the stimulus fish of the tank on one end were unable to see the stimulus fish on the opposite end, but the subject in the center tank was able to see the stimuli in each of the flanking tanks. The two stimulus shoals each contained four zebrafish, all of the same sex as the test subject, but each fish was taken from a different home tank. A novel shoal was formed using one fish from each of the four same-sex holding tanks. After seven days, that same shoal was considered established and was reused in trials as an established stimulus shoal. After an established shoal was used in one trial, it was added to a tank for later use so that each of the fish could be individually utilized as a subject. For each stimulus pairing, the position of each stimulus type was counterbalanced such that an equivalent number of trials were run with each stimulus on the left side as on the right side. The two stimuli in each flanking compartment consisted of a same-sex newly-formed shoal on one side and a same-sex established shoal on the opposite side. Behavior tracking and analysis was conducted using EthoVision XT 15.0.
Acquisition of tracks of the test subject via EthoVision XT 15.0 was initiated after a 3-minute habituation period. Prior to and during the habituation period, the lateral sides and rearmost wall of the experimental tank were opaque in order to reduce the influence of the surrounding area. The lateral sides of the center tank were covered with removable opaque barriers to obstruct the test subjects' vision into the flanking tanks. After the 3-minute habituation period, the partitions on either side of the central tank were removed; the rearmost wall remained opaque. During the subsequent 6-minute period of free swimming, EthoVision XT 15.0 recorded and tracked the subject's behavior. Subject position within the central testing tank was recorded using a digital video camera mounted on a tripod positioned directly in front of the central testing tank. EthoVision XT 15.0 is a video tracking software program that can detect an animal in a live video feed, distinguish it from the background, and track the animal's movement, behavior, and activity. EthoVision XT 15.0 was programmed to analyze each recording in real time, thus acquiring measurements of spatial location and subjects' proximity to adjacent tanks. Additionally, data were extracted offline to refine the evaluation of swimming patterns and density. The tracking area within the central testing tank was divided into four quadrants (upper left, lower left, upper right, and lower right of the tank). Each quadrant of the arena constituted 25% of the total arena. Zebrafish preferences were characterized by swimming patterns as the program quantified time spent in each zone as well as zone transitions [56,59]. Spatiotemporal data for each subject were captured and processed using EthoVision XT 15.0. Because of our study's focus on the subject's response to the level of intra-shoal familiarity, we ensured equal familiarity among the subjects of both stimulus shoals from the perspective of the subject fish to minimize bias. The method and layout of subject and shoal selection ensured that the subject had previously been housed with only one fish of the novel shoal and only one fish of the established shoal. Design, measures, and analyses This experiment was a 2 (Subject Sex) X 2 (Shoal Type) mixed-factorial design which was counterbalanced across both levels of presentation side for each shoal type (Novel-Left vs Established-Right or Established-Left vs Novel-Right). Because no significant main effects or interactions were found for the Side factor, data were collapsed across Side for subsequent factorial analyses. The experimental design allowed for the experimental factors to be crossed against a flexibly-defined observation zone as an additional factor. As such, the analyses included four levels of quadrant (top left, bottom left, top right, bottom right) or three levels of vertical zone (left third, middle third, right third). While most statistical analyses used quadrants as the levels for the observational zone factor, certain analyses (e.g. side preference) were better examined using vertical zones as the levels of the observational zone factor. The following measures were obtained during the session with each subject: • Cumulative duration percent within each quadrant: EthoVision quantified the total time each subject spent in each quadrant during the entire session, and the subsequent percentage of time within each quadrant was calculated for each subject's session.
• Percent of session time moving: EthoVision quantified the total duration of tracks recorded in each quadrant while the subject was moving at any velocity. • Percent of session time freezing: EthoVision quantified the total duration of tracks recorded in each quadrant where the subject had ceased any detectable movement for a minimum of 3 seconds. In addition, swimming patterns were monitored by experimenters to ensure that no subject remained motionless for one minute or longer, which would necessitate discontinuation of the trial and exclusion of that subject's data (no subjects were excluded under this criterion). • Average movement velocity during session: EthoVision measured velocity by dividing the distance the subject moved by the time difference between samples during motion tracking. EthoVision quantified the movement speed in mm/s for each subject's movement within each quadrant, which was then used to calculate the average movement velocity within each quadrant during the session. • Variability in velocity: The variability in the velocity across tracking samples can be characterized by the standard deviation of the average movement velocity within each quadrant during the session. Using IBM SPSS (v.26), factorial data were analyzed using a Linear Mixed Model (LMM) with Type III Sums of Squares at α = .05. As heterogeneity of variance is common with these types of data [56], the model was set with a diagonal covariance structure and degrees of freedom for the denominator were adjusted using the Maximum Likelihood estimator for the LMM. Significant main effects and interactions were explored using unplanned comparisons with Bonferroni correction for family-wise error. In order to characterize behavioral diversity across all zones in the observational arena, a single-variable index based upon Shannon entropy [60] was calculated using the following formula:
$$H_n = -\frac{\sum_{i=1}^{n} p_i \ln p_i}{\ln n}$$
where H_n is the index of behavioral diversity, p_i is the proportion of cumulative session time spent in zone i, and n is the total number of zones characterized with the index. The value of H_n can range from 0 (only systematic variability) to 1.0 (completely random variability); higher values of H_n indicate lower systematic variability in zone selection. This index has been used to characterize the movement of Humboldt penguins (Spheniscus humboldti) in a naturalistic zoo enclosure divided into zones of unequal sizes in order to examine the effects of live feeding events on the behavioral diversity of subjects across the enclosure [61]. Other studies have used this index to characterize the response of California blackworm (Lumbriculus variegatus) to copper sulfate exposure [62] as well as the response of zebrafish and checker barbs (Puntius oligolepis) to different levels of structural complexity in artificial aquatic environments [63]. Therefore, H_n is useful as a measure characterizing systematic and random variability across a number of measures (e.g. duration in each zone) with a single index (for a review of behavioral diversity indices, see [64]). If the proportion of session time a subject in the current study spent in any particular zone is 1.0, then H_n = 0.0. If a subject's proportionate time in all zones is equal across the zones, then H_n = 1.0. Indices of H_n between 0.0 and 1.0 indicate the extent to which a subject is systematically preferring any zone over the other zones.
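To make the index concrete, here is a minimal Python sketch of the normalized Shannon entropy described above; the formula as implemented here and the example zone proportions are illustrative assumptions rather than code or values from the study.

```python
import math

def behavioral_diversity(proportions):
    """Normalized Shannon entropy H_n over zone-time proportions.

    Returns 0.0 when all session time falls in a single zone and 1.0
    when time is split equally across all n zones.
    """
    n = len(proportions)
    entropy = -sum(p * math.log(p) for p in proportions if p > 0)
    return entropy / math.log(n)

# A subject spending most of its session in one quadrant shows a low
# index; an even split across quadrants gives the maximum of 1.0.
print(behavioral_diversity([0.70, 0.10, 0.10, 0.10]))  # ~0.68, systematic preference
print(behavioral_diversity([0.25, 0.25, 0.25, 0.25]))  # 1.0, no zone preference
```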
Results Incomplete tracking data were obtained from two female subjects, and their respective data were excluded from analyses involving zone parameters. Tracking data from the remaining 80 subjects (45 males and 35 females) were included for analyses involving zone parameters. Duration within quadrants Overall, subjects spent more time in lower zones adjacent to test shoals than in upper zones. Qualitatively, this difference in zone preference is demonstrated by location heat maps generated with EthoVision (see Fig 2). However, cumulative heat maps are limited in providing detail on the magnitude of these differences, which is better characterized using quantitative analyses. When analyzing the percent cumulative duration within quadrants, there was a significant preference in both sexes for the lower quadrant nearest the established shoal. Behavioral diversity index Sufficient data were obtained from the two excluded females to be included in the behavioral diversity index measures, providing a total sample size of 82 subjects (45 males and 37 females). For the current data, two indices of H_n were calculated for each subject: one over the four quadrants and one over the three vertical zones (Fig 7). Taken together, these results indicate that male subjects show less systematic variability in their utilization of zones in the observation arena than female subjects, revealing sex differences in overall responding not revealed by main effects or interactions in the Sex X Zone factorial analyses of the specific dependent measures (except for the main effect of Sex on swim velocity). Discussion The results presented here indicate that solitary zebrafish can differentiate between established and novel shoals based solely on visual cues and choose to spend more time near an established shoal. Movement measures indicate that both sexes spent more time motionless when in the lower quadrant nearest the established shoal. Velocity measures show that both sexes exhibited less variability in velocity when in the upper quadrant near the established shoal compared to the upper quadrant near the novel shoal, potentially indicative of less darting behavior. While both male and female zebrafish spent more time in the lower quadrant near the established shoal, there were significant differences in the manner by which males and females moved around the field; females appeared to investigate more locally while males investigated more across the two shoals. Use of the Shannon entropy measure further confirmed the increased behavioral entropy for males and provides a useful measure for future comparison across studies that use different methods and arenas. The addition of shoal stimuli of varying familiarity to the three-tank open-swim preference test provides an attractive and easy-to-use system for studies investigating zebrafish models of disorders that affect social behavior and recognition of social cues. The present analyses include a novel application of Shannon entropy to characterize the diversity in zone preference in the three-tank open-swim preference test. The three-tank open-swim preference test has been used successfully in a number of previous studies, and it is becoming increasingly popular for the investigation of social preference in both normal populations (e.g. EKK or AB wild type) [26,56,65] and in clinical models [11,61]. Previous studies established the efficacy of the technique and demonstrated experimental effects through the analysis of unitary measures such as duration in zone, swimming velocity, etc.
The present study establishes the utility of a unitless behavioral diversity index to characterize the extent to which subjects demonstrate a systematic preference among the observational zones of interest in the three-tank open-swim preference task, and allows for comparison of the diversity index across different zone characterizations (e.g. quadrants versus vertical thirds) and between subject groupings (e.g. sex). As the purpose of the three-tank open-swim preference test is to determine the extent to which test subjects demonstrate a preference for and/or avoidance of zones in proximity to test stimuli, the application of an index of behavioral diversity such as Shannon entropy can be useful for future studies of social preference in tasks involving movement in open arenas. Previous studies have demonstrated sex differences in zebrafish preference based on the number of individuals in a shoal [66] and pigment patterns [67]. Our results suggest sex-specific differences in zebrafish preference when given a choice between a newly-formed shoal and an established shoal, with females exhibiting less entropy, a greater preference for the established shoal, and increased average swimming speed. Swimming speed and vertical tank location during exposure to novel environments and other common stressors have been linked to anxiety levels in zebrafish [68,69]. Increased swimming speeds could indicate darting patterns that result from an expression of fear [70], while more time spent in the lower half of a novel tank is indicative of an anxious state, which can be reversed with exposure to anxiolytic drugs [71]. Both male and female zebrafish exhibit differences in anxiety-like responses, with females spending more time in the bottom half of the tank during a novel tank task and more time in the dark zone during a light-dark task [72]. Further, experiments on wild-caught zebrafish show that males are bolder during feeding than females [73]. Our results show a greater percentage of time spent in the lower quadrants for both sexes, indicating an anxious state for both male and female subjects. However, the faster average swimming speeds of female subjects observed in this study could be indicative of a higher level of anxiety compared to male subjects, suggesting that the social choice paradigm can elicit subtle behavioral anxiety differences compared to the novel tank task. Further, anxiety has the potential to induce shoaling behavior, as shoaling reflects a strategy of energy rationing [74]. The subtle behavioral differences exhibited by females in our study (lower entropy and a higher tendency to prefer the established shoal) are potentially due to higher anxiety levels. Zebrafish place preference can be influenced by chemical [70,75] or live stimuli [56,76], and a three-chamber apparatus is commonly utilized to quantify preference based on subject swimming patterns [11,56]. As an example of place bias indicating social choice, the Social Preference Index has been utilized in several studies concerning social preference in zebrafish, equating proximity with preference for interaction [77][78][79]. Swimming patterns localized near one stimulus over another could indicate preference, but other driving forces of social behavior, such as investigative or aggressive interactions, should not be discounted. The present study solely investigated social interaction based on visual information and did not distinguish between aggression and investigation.
However, aggressive interaction or investigation would also indicate a change in social interaction occurring due to the observed differences in stimulus shoal cohesion, suggesting that the subject fish were still able to differentiate between shoal types. A salient component of the symptomatology of Fetal Alcohol Spectrum Disorders (FASD), Autism Spectrum Disorders (ASD), and several other neurodevelopmental disorders is the display of atypical social behavior. Since the zebrafish model exists at the intersection of behavioral complexity and biological simplicity, use of the zebrafish to study neurodevelopmental disorders has recently gained popularity. Shoaling tendencies are first distinguishably exhibited by young zebrafish approximately two weeks post-hatch, and shoaling as a species-specific behavior is critically influenced by early life experience [58]. Zebrafish exposed to alcohol at the embryonic stage have been shown to display impaired development of shoaling behavior [80]. In humans, FASD is a life-long disorder, and social impairments including social withdrawal and depression persist for the duration of a patient's life. Similarly, embryonic zebrafish exposed to ethanol exhibit severely reduced shoaling responses that continue two years following initial exposure and derive from central nervous system changes rather than motor or visual dysfunction [80,81]. The established versus new shoaling model outlined here contributes to understanding social behavior in typically-developing zebrafish, though the model may be useful to better classify social impairment in neurodevelopmental disorders such as FASD or ASD. Social interaction difficulties in FASD and ASD may be evident through a failure to recognize differences between established and new shoals. Such behavior would be indicated by a lack of preference for either shoal, shown by a subject spending equal amounts of time in close proximity to either stimulus. The DYRK1A gene, located in the Down Syndrome Critical Region (DSCR), has been identified as a significant element in the pathogenesis of ASD in humans [82]. DYRK1A mutation in humans is connected to intellectual impairment, microcephaly, and ASD. When DYRK1A is knocked out in zebrafish (DYRK1A KO), affected subjects exhibit social abnormalities parallel to those displayed by human ASD patients. Specifically, DYRK1A KO resulted in decreased expression of c-fos, a proto-oncogene important for cellular proliferation and differentiation [82,83]. When presented with a three-member social stimulus shoal, DYRK1A KO zebrafish spent significantly less time in the zone of closest proximity to the shoal compared to wildtype (WT) zebrafish [82]. Using the DYRK1A KO in our three-chamber social choice model with established versus newly-formed shoals as flanking stimuli would help determine whether DYRK1A expression is necessary for identifying intragroup familiarity between shoals. Our findings and conclusions present implications both for basic research on the mechanisms of social preference in animals and for the aforementioned zebrafish models of human disease and behavioral dysfunction. Future research on social preference using the three-tank open-swim preference test could explore the various characteristics of intra-shoal activity that indicate social novelty within a newly-established shoal and are subsequently detected by the observing subject.
In addition, future research could explore whether other factors such as age, size, or health status affect preference for an established shoal, and whether other features of test shoals and/or individual subjects can override the preference for established shoals. In order to extend and further explore various aspects of subjects' movement, future researchers could use a more sophisticated dual-camera setup that allows for analysis of three-dimensional movement data [84,85], which may subsequently reveal other, perhaps more subtle, differences in shoal preference. Finally, the demonstrated value of Shannon entropy in the current study introduces new possibilities for the comparison of arena-based movement of subjects under a variety of conditions across studies.
2022-03-10T14:21:01.284Z
2022-03-07T00:00:00.000
{ "year": 2022, "sha1": "53899dfdcdcb7b1f9f695a3b15fb13fa4c9dbc1a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0265703&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a631c6bc6bc21f0e18d567cc255c922029072a5b", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Biology" ] }
234015372
pes2o/s2orc
v3-fos-license
The effect of papain and bromelain enzymes on the growth and feed utilization of post larvae Litopenaeus vannamei

The aim of this research was to evaluate the effect of papain and bromelain enzymes in feed on the growth and feed utilization of post-larvae of the shrimp Litopenaeus vannamei. The research was conducted from 23 January to 2 March 2020 at the Fish Hatchery Laboratory, Faculty of Marine and Fisheries, Syiah Kuala University. A completely randomized design consisting of four treatments and three replications was used in this study: A (feed without enzyme); B (feed + 1% papain enzyme); C (feed + 1% bromelain enzyme); and D (feed + 1% papain enzyme + 1% bromelain enzyme). The shrimp were fed the test feed five times daily, at 07.00 AM, 11.00 AM, 03.00 PM, 07.00 PM and 11.00 PM, for 42 days. The ANOVA results showed that the addition of papain and bromelain enzymes to the feed had a significant effect on the weight gain, daily growth rate and specific growth rate of vannamei shrimp (P < 0.05). However, there was no significant effect on survival, feed conversion or feed efficiency (P > 0.05). The best results were found in treatment D, with a weight gain of 1.15 g, daily growth rate of 0.027 g day-1, specific growth rate of 7.14% day-1, feed conversion ratio of 1.85 and feed efficiency of 54.15%, although the highest survival was found in treatment A. It is concluded that the best treatment is the combination of 1% papain enzyme and 1% bromelain enzyme.

Introduction

The success of vannamei shrimp cultivation is supported by the availability of feed, because feed affects growth and survival [1]. The feed given to cultured shrimp must be of good quality, with a high nutritional content, and must be easily digested in the shrimp gut so as to support growth and high feed efficiency [2]. One way to increase feed efficiency is to optimize the digestion and absorption of feed, which can be achieved by adding digestive enzymes to the feed. Several enzymes are commonly added to feed to increase feed digestibility, including papain and bromelain. Papain comes from papaya sap, which contains protease enzymes that hydrolyze protein. According to Rostika et al. [3], enzymes in feed can improve protein absorption and digestion in the digestive tract. Ellson et al. [4] state that papain is a proteolytic enzyme isolated from papaya fruit sap obtained by tapping, or from the leaves of the papaya plant (Carica papaya L.). According to Manush [5], protease activity in the digestive tract is the key factor in digestibility and the efficiency of protein digestion. Papain in feed helps accelerate the digestive process of shrimp by breaking down protein into simpler forms that are easily digested and absorbed by the intestine, thereby increasing the growth rate. Several other studies have reported that papain in feed can increase the growth of fish [6,7,8] and shrimp [9]. In addition to papain, the enzyme bromelain also plays a role in the digestion of food; bromelain can be produced from pineapple extract.
Research by Rachmawati and Samidjan [10] showed that the addition of pineapple extract to feed can affect the efficiency of feed utilization and the growth of vannamei shrimp. Rostika et al. [3] found that the combination of papain and bromelain enzymes in feed could increase the daily growth rate and feed utilization efficiency of Pangasianodon hypophthalmus. Therefore, we studied the effect of adding a combination of papain and bromelain enzymes to feed on the growth of vannamei shrimp. The aim of this research was to evaluate the effect of papain and bromelain enzymes in feed on growth and feed utilization in post-larvae of Litopenaeus vannamei.

Tools and Material

The tools used in this research were DO meters, digital scales, thermometers, pH meters, rulers, filters, plastic containers and aerators. The materials used were L. vannamei shrimp, fish meal, shrimp head meal, young pineapple, head meal, tofu waste meal, tapioca flour, bran, corn meal, fish oil, vitamins, minerals, papain enzyme and bromelain enzyme.

Method

The study was conducted at the Fish Hatchery Laboratory, Marine and Fisheries Faculty, Syiah Kuala University, Banda Aceh, Indonesia, over 42 days (23 January to 2 March 2020). The L. vannamei were stocked in 35 L plastic buckets at 35 shrimp per bucket, with an initial weight of 0.06 g per shrimp. The shrimp were kept for 42 days and sampled every 7 days. Feeding was carried out five times a day, at 07.00, 11.00, 15.00, 19.00 and 23.00 WIB. Feed was given at 5% of the shrimp biomass weight. The study used a completely randomized design (CRD) with four treatments and four replications. The treatments tested were as follows: A = test feed without enzymes; B = test feed + 1% papain enzyme; C = test feed + 1% bromelain enzyme; D = test feed + 1% papain enzyme + 1% bromelain enzyme.

Bromelain Enzyme Extraction

Pineapple fruit was cleaned and mashed in a blender, then homogenized 1:1 with cold phosphate buffer (pH 7.0). The resulting solution was centrifuged at 3000 g for 15 minutes, and the supernatant was separated from the sediment (pellet). The supernatant obtained was the crude extract of the bromelain enzyme [11].

Test Feed Formula

The feed formulation is shown in Table 1.

Results and Discussion

The results showed that the addition of papain and bromelain enzymes to the feed had an effect on growth performance, with the best results obtained in the combination treatment of 1% papain enzyme and 1% bromelain enzyme (P < 0.05). However, the addition of these enzymes to the feed did not affect feed utilization or the survival of the vannamei shrimp larvae (P > 0.05) (Table 2). The combination of the two enzymes gave better results than using either enzyme alone. The two enzymes are assumed to have complementary functions in the digestion of feed. According to Winarno [16], the papain enzyme works more actively to hydrolyze vegetable protein, while the bromelain enzyme works more actively to hydrolyze animal protein. As reported by Kordi [17], vannamei shrimp is an omnivorous scavenger with a voracious appetite. Thus, papain and bromelain are thought to help hydrolyze feed protein into simpler molecules, optimizing the digestion and absorption of feed by the shrimp and thereby increasing growth.
However, further research is needed to determine the optimal enzyme concentrations for feed utilization in vannamei shrimp. The use of papain and bromelain enzymes in feed has also been reported by Taqwdasbriliani et al. [18], who found that adding a combination of papain and bromelain enzymes to the commercial feed of the tiger grouper (Epinephelus fuscoguttatus) had an effect on fish growth, feed use efficiency and protein efficiency. Ananda et al. [19] stated that feed supplemented with papain produced a better growth rate than feed without papain. Nisrinah [20] likewise reported that feed containing bromelain performed better than feed without it. Mo et al. [21] reported a general increase in the growth performance of Ctenopharyngodon idellus (feed conversion ratio, protein efficiency ratio and relative weight gain) when feed was supplemented with 5 g/kg of Saccharomyces cerevisiae and enzymes (bromelain and papain, at a ratio of 1:1).

Feed efficiency

Vannamei shrimp survival rates during rearing were as follows: treatment A (81.9%), B (73.33%), C (70.48%) and D (76.19%), as shown in Figure 1. The best survival rate was obtained in treatment A, but it was not significantly different from the other treatments (P > 0.05). This may be because survival is affected not only by feed, but also by environmental factors (water quality) and handling during the study. The survival of shrimp is influenced by internal and external factors; internal factors come from the shrimp itself [10]. However, the water quality during the study remained within a good range for shrimp growth. The water quality recorded during the study was as follows: temperature 26.5-29 °C, pH 7.4-8.2, salinity around 20-23 ppt and dissolved oxygen (DO) 4.6-5.38 ppm. These values indicate good water quality. This is consistent with the statement of Kordi [17] that shrimp grow at salinities of 10-30 ppt and temperatures of 24-34 °C, with ideal growth at salinities of 15-25 ppt and temperatures of 28-31 °C. The optimum DO for shrimp maintenance is ≥ 3 ppm, with pH 7.5-8.5 [22].

Conclusion

Based on the results of this research, the addition of papain and bromelain enzymes to Litopenaeus vannamei feed affected growth performance but did not affect feed utilization. The combination of the papain and bromelain enzymes gives better results than using either enzyme separately.
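For context, the growth and feed-utilization measures reported above can be computed as in the minimal Python sketch below. The formulas are the standard aquaculture definitions, which the paper does not state explicitly, so they are an assumption here, and the treatment D figures are used only as an illustrative consistency check.

import math

def specific_growth_rate(w0, wt, days):
    # SGR (% per day) = (ln Wt - ln W0) / t * 100, the standard definition
    return (math.log(wt) - math.log(w0)) / days * 100

def feed_conversion_ratio(feed_given, weight_gain):
    # FCR = total feed given / wet weight gain (lower is better)
    return feed_given / weight_gain

def feed_efficiency(weight_gain, feed_given):
    # Feed efficiency (%) = weight gain / feed given * 100, i.e. 100 / FCR
    return weight_gain / feed_given * 100

# Treatment D, as reported: initial weight 0.06 g, weight gain 1.15 g, 42 days
w0, gain, days = 0.06, 1.15, 42
print(specific_growth_rate(w0, w0 + gain, days))  # ~7.15 %/day vs reported 7.14

The computed SGR of about 7.15% day-1 matches the reported 7.14% day-1 to rounding, and the reported feed efficiency of 54.15% is consistent with the reported FCR of 1.85 (100/1.85 ≈ 54.1), suggesting that these conventional definitions are indeed the ones used.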
2021-05-10T00:03:26.960Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "ae66efc72832bec6ccae8e7a6209c1dbfa2e6283", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/674/1/012097", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1a2e1454dac6215f5bcbb37f63dbbadab3d9db73", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
232086187
pes2o/s2orc
v3-fos-license
Adolescents’ Concerns, Routines, Peer Activities, Frustration, and Optimism in the Time of COVID-19 Confinement in Spain The global outbreak of COVID-19 has brought changes in adolescents’ daily routines, restrictions to in-person interactions, and serious concerns about the situation. The purpose of this study was to explore COVID-19-related concerns, daily routines, and online peer activities during the confinement period according to sex and age groups. Additionally, the relationship of these factors and optimism along with adolescents’ frustration was examined. Participants included 1246 Spanish students aged 16–25 years old (M = 19.57; SD = 2.53; 70.8% girls). The results indicated that the top concern was their studies. COVID-19-related concerns, daily routines, and online peer activities varied by sex and age. Findings also revealed moderate to high levels of frustration, which were associated with adolescents’ main concerns, online peer activities, maintaining routines, and optimism. The results are discussed in light of their implications in designing support programs and resources to reduce the psychological impact of COVID-19 on adolescent mental health. Introduction The novel coronavirus disease (COVID-19) has become a major public health concern and was declared a pandemic by the World Health Organization in March 2020. Since the first cases were confirmed, the epidemic has rapidly spread worldwide. To control the outbreak, the Spanish government, similar to the governments of other countries all over the world, ordered a nationwide restrictive confinement for three months, starting from mid-March. During this period, a high restriction of mobility was imposed. Citizens were not allowed to leave their house except for essential reasons (e.g., buying supplies in the supermarket or pharmacy, taking care of vulnerable people, or going to work if teleworking was not possible). Schools were closed and other educational, social, cultural, artistic, sporting, or similar activities were canceled. Although adolescents and young people are at lower risk of critical COVID-19 symptoms [1], strict confinement orders entailed important changes in daily routines and social interactions. With schools closed, youths had to adapt quickly to new remote learning environments, while uncertainties about their studies and their near academic future emerged [2]. These difficulties have been particularly marked among higher education students, who were preparing for university entrance exams, taking semester assessments, or doing practical training that could not adequately be replaced by online instruction [3]. Other extracurricular (e.g., educational and sports) and out-of-home leisure activities (e.g., hanging out with friends and dating), which provide valuable resources for socialization [4] were canceled, leaving adolescents with limited opportunities for face-to-face social contact. Although online social networks and instant messaging apps may have compensated for these shortages [5], there is emerging evidence that COVID-19 confinement has increased the risk of social isolation and loneliness among youths [6]. However, changes in interpersonal relationships go beyond friendships, also affecting family interactions. Family dynamics have had to change according to the stay-at-home mandates, forcing families to spend all their time together [7]. 
As a consequence, adolescents may have experienced restrictions in their personal space, while parents have faced an increase in daily stressors (including demands of caregiving and parenting, teleworking, home-schooling, threat of contagion, or financial insecurity, among others). Exposure to COVID-19 challenges is having a substantial cost on psychological wellbeing [8,9]. To date, a growing number of studies among children and youths have reported high rates of anxiety and stress, along with difficulties with concentrating and worrying [10][11][12]. Although research is still progressing, it seems that from the very beginning, concerns about the consequences of COVID-19 have been particularly salient among young people. In early reports at the first stage of the outbreak in China, college students acknowledged being fearful of what was happening [13]. As this epidemic spread worldwide, this finding has been confirmed by more recent studies with adolescents and university students. These studies have found moderate levels of concern about COVID-19 [14], which were significantly related with increased anxiety and depression [15][16][17][18][19][20]. However, these worries may not be experienced by all young people in the same way. Indeed, studies indicate that levels of anxiety and fear about COVID-19 increase with age [14] and are higher among females [12,17,18,21]. Given these findings, research aimed at identifying the particular concerns that young people are experiencing would help to define better how they are coping with this crisis. To date, research has focused more on analyzing the overall levels of fear of COVID-19, e.g., [14,17,22], rather than describing adolescents' reasons for these concerns. Among the few available sources of evidence, studies indicate that the main worries of adolescents revolve around the issues that currently cause more uncertainty: the health of those more vulnerable to COVID-19 and the economic situation [23,24]. Although less studied, school-related concerns may have also become magnified [12,25], especially when considering that the closure of schools during the pandemic is a global issue [2] and that education is of central importance for adolescents' future. Another important, yet less studied, consequence of the confinement may be frustration [26]. As shown by previous research, feelings of frustration emerge in situations in which people feel pressured to comply with rules which are perceived as a threat to their freedom [27]. Although adolescents may have different motivations to adhere to imposed measures, the confinement situation has certainly involved a number of restrictions on individual choice and decision-making (i.e., limitations on non-essential movements, prohibition of gathering with friends, and the obligation of wearing a face mask), which may have resulted in elevated levels of frustration. However, in the context of this pandemic, individual experiences of feeling thwarted may have been not only a common but a harmful consequence of the lockdown. Accordingly, ample research has well established that the psychological costs of frustration are related to higher levels of stress, depression, or anxiety, among others (see [28] for a review). While there is no doubt that COVID-19 is having negative consequences in different areas of young people's lives, research investigating the factors that help adolescents handle this stressful experience is very valuable [29]. 
Over the last months, social and health agencies have offered guidelines to support adolescents. However, empirical evidence on the factors that mitigate the risk of psychological distress is still lacking. Among these, experts have outlined the importance of establishing stay-at-home routines and doing a variety of activities (e.g., school-work, hobbies, and exercising) to reduce the psychological stress of the confinement [30], yet few studies have examined the role that these activities play in adolescent psychological functioning. On an interpersonal level, interactions with peers may also have an important influence on the way that adolescents experience this health crisis. In adolescence, peer relationships become especially salient as they contribute not only to satisfying the needs of intimacy and companionship but also to navigate the challenges of this developmental stage [31]. During the lockdown, youths stayed connected with their friends and classmates although face-to-face interactions moved to an online setting [32]. Technology and social media may have indeed helped to compensate for the lack of in-person interactions. Nonetheless, little is known about the specific online activities that youths have been engaging in with their peers, and the role of these activities in adolescent psychological outcomes during the lockdown. Finally, some authors have drawn attention to optimism as a factor that may favor better adaptive outcomes under challenging situations [31]. As such, recent studies demonstrated that keeping a more optimistic view of the situation was related to lower rates of anxiety and depressive symptoms during the COVID-19 pandemic [32,33]. The present study focused on the psychological impact of the COVID-19 lockdown in order to offer insights into the factors related to adolescents' feelings of frustration. Specifically, the first aim was to describe adolescents' main concerns about the impact of COVID-19. It was expected that school-related concerns [12,25], concerns about the health of those more vulnerable to COVID-19, and concerns about the economic situation [23,24] would be the most salient worries among young people. In addition, it was hypothesized that COVID-19-related concerns would be higher among females [12,17,18,21] and participants in higher age groups [14]. Second, this study explored the activities that, on a daily basis or with their peers, adolescents have been engaging in during the COVID-19 lockdown. Participants were expected to engage in several stay-at-home routines [30]. Besides, considering that social media by its nature may compensate for a lack of face-to-face social interactions [32], we hypothesized that youths may have been using technology to feel supported, loved, or cared for by their friends during the confinement period. Third, given that frustration experiences have significant costs on wellbeing and diminished functioning, this study also examined adolescents' feelings of frustration during the confinement period and their link with COVID-19-related concerns, daily routines, online activities with peers, and optimism. Additionally, sex and age differences were examined. Adolescents were expected to display moderate to high levels of frustration [26]. As literature addressing the role of adolescents' concerns and routines on frustration is still scant, the association between these variables was addressed in an exploratory fashion. 
Participants and Procedure

The study sample consisted of 1246 students (70.8% girls) from Spain. The participant age range was 16-25 years (M = 19.57; SD = 2.53). Following Steinberg [34], participants in the 16 to 18 age group were considered middle adolescents, while participants in the 19 to 25 age group were considered late adolescents. In this study, 42.3% (n = 527) were middle adolescents and 57.7% (n = 719) were late adolescents. The distribution of participants according to sex was similar between the younger (16-18 years old; 70.4% girls) and the older (19-25 years old; 71.07% girls) age groups, χ2(1) = 0.07, p > 0.05. In terms of geographic distribution, most participants (88.9%) came from the southern area of Spain (Andalucía), although students from all other regions of Spain were also represented in the sample. All respondents were students enrolled in compulsory secondary education or professional training (17.9%), post-secondary education (24%), or university (58.1%). During the confinement, participants were living in two-parent families (86.4%), single-parent families (7%), with other relatives (1.5%), with roommates and/or their partners (4.8%), or alone (0.3%). Among the study sample, few participants (0.1%) and their relatives (3.1%) were diagnosed with COVID-19. Additionally, although testing for COVID-19 was not performed, other participants also indicated that they (7.1%) or their relatives (8.2%) had COVID-19-like symptoms. In terms of the socio-economic impact of the health crisis, more than half of the sample (52.4%) reported a significant reduction in their family's income as a result of COVID-19. Data were collected through an online survey using the Qualtrics software platform during the fifth and sixth weeks of the state of emergency in Spain (from 17 April to 1 May 2020). Participants were recruited through snowball sampling. The authors first distributed the online questionnaire through colleagues and potentially eligible participants who met the inclusion criteria: students aged 16-25 years, of Spanish nationality, and living in Spain. Then, initial participants were asked to send the questionnaire to other potential informants who met the inclusion criteria. The survey link was also shared through social media (Facebook, Instagram, Twitter, and WhatsApp), in the newsletter of the authors' university, and with school institutions and local youth organizations. Of the 1582 students who actively consented to participate, 336 respondents were excluded from the study sample. Reasons for exclusion included the following: participants did not meet the inclusion criteria; the questionnaire was blank, or respondents only answered demographic questions; or the time spent completing the questionnaire was less than 7 min, which is significantly faster (10th percentile) than the average (11 min). The approval to conduct this study was obtained from the Ethics Committee of the Universidad Loyola Andalucía. Participation was voluntary and anonymity was guaranteed. Active informed consent was obtained prior to participation.

Measures

COVID-19-related concerns: An ad hoc five-item questionnaire was used to measure concerns about the impact of COVID-19 during the confinement. With the question "how much have you been worrying about . . . ?"
participants indicated their concerns related to: their own health (i.e., "getting ill with COVID-19"); the health of others (i.e., "my relatives getting ill with COVID-19"); their family financial strain, currently (i.e., "your family financial situation in this moment") and in the future (i.e., "your family financial situation in few months"); and their education (i.e., "this situation could negatively affect your studies"). This measure was created considering the main results of previous studies about youth concerns [12,23]. Participants reported their level of concern with each statement on a scale ranging from 1 (nothing) to 5 (a lot). Cronbach's alpha for this study was 0.73. Daily routines during the confinement: To assess routines during the confinement, participants were asked to indicate the frequency with which they engaged in a set of five activities on a daily basis. These activities included the following: "maintaining a routine (e.g., getting up, eating . . . at the same time)"; "doing physical or sports activities"; "doing intellectual activities (e.g., studying and reading)"; "doing leisure activities (e.g., playing games, watching a series, or listening to music)"; and "doing creative activities (e.g., writing and handcrafting)". A pilot study with nine adolescents was conducted to explore if this pool of items covered all the possible daily routines during the confinement. Participants agreed that no other daily routine should be included, so this measure was tested in the full sample. Items were answered on a frequency scale ranging from 1 (never) to 5 (many times). Online peer activities during the confinement: Online activities that participants engaged in during the confinement to feel supported, loved, or cared for by their friends were measured with an ad hoc seven-item questionnaire. These activities included: "sharing personal pictures or videos of activities I do at home"; "sharing funny memes or videos"; "messaging on WhatsApp, Telegram, or others"; "making calls or video calls"; "playing online"; "doing challenges"; and "doing activities simultaneously with friends (e.g., watching series, doing homework, and playing sports)". Items were chosen based on a pilot study with nine participants who were asked about the online activities that they were doing during confinement. Items were answered on a frequency scale ranging from 1 (never) to 5 (many times). Cronbach's alpha for this study was 0.67. Feelings of frustration: The general sense of frustration during the confinement was captured using a single item created for this study. Participants were asked "in the past two weeks, to what extent did you feel frustrated?" [35]. Response options were on a Likert scale from 1 (nothing) to 5 (very much). Optimism: Dispositional optimism, deemed as the general expectation that good things will happen, was measured with the three-item optimism subscale of the Comprehensive Inventory of Thriving [36]. Respondents rated items (e.g., "I have a positive outlook on life") on a scale ranging from 1 (strongly disagree) to 5 (strongly agree). Cronbach's alpha for this study was 0.85. Plan of Analysis First, to examine adolescents' concerns about the impact of COVID-19, their daily routines during the COVID-19 lockdown, and the online peer activities used to help youths feel supported during the confinement, descriptive statistics were computed. 
Besides, three different two-way multivariate analyses of variance (MANOVAs) were conducted to examine mean differences in adolescents' concerns about COVID-19, their daily routines, and online peer activities based on sex, age groups, and the interaction between sex and age. Second, multiple regression analysis was calculated to examine the association of adolescents' concerns, their daily routines, online peer activities, and optimism with adolescents' experiences of frustration. We entered independent variables in the model using the stepwise method. Variables were organized in three blocks: sex and age groups were entered in Block 1; COVID-19-related concerns, online peer activities, and optimism were entered as predictors in Block 2; and finally, the five daily routines were entered in Block 3. The stepwise method used was iterative. The order in which predictors were entered into the model was based on a statistical criterion [37]. Thus, the number of models was not dependent on the number of blocks, but rather on the number of predictors that were significantly associated with the dependent variable. This method began by introducing the independent variable of Block 1 with the highest simple correlation with the outcome. If this predictor significantly improved the percentage of variance explained by the model, it was retained and another predictor was considered. The second predictor included was the next independent variable within the block that had the largest semipartial correlation with the outcome or, in other words, the predictor that explained the largest part of the remaining variance in the model. If no other variable was identified, it moved on to the next block. The analysis concluded when no more variables from any of the three blocks could make a significant contribution to the predictive power of the model. Each time a variable was introduced, all the statistics of the model were recalculated, resulting in a new model. Collinearity was assessed by calculating tolerance and Variance Inflation Factors (VIF) for each independent variable introduced in the model. The partial eta square (ηp 2 ) and the coefficient R 2 were used as measures of effect size. Concerns about COVID-19 among Youths by Sex and Age As shown in Table 1, among the top concerns were that a relative could get infected with COVID-19 and that the pandemic could impact studies. In contrast, participants were least concerned about their own health. Results from MANOVA with the five concerns about COVID-19 as dependent variables and sex and age groups as independent variables revealed significant multivariate effects for both sex, Wilks' λ = 0.94; F(5, 1063) = 14.13, p ≤ 0.001, ηp 2 = 0.06, and age, Wilks' λ = 0.99; F(5, 1063) = 2.56, p = 0.026, ηp 2 = 0.01. Subsequent univariate ANOVAs on adolescents' concerns (see Table 1) indicated that girls showed significantly higher levels of concern for all issues than boys. Regarding age differences, the results also indicated that younger adolescents were significantly more worried about their studies and the health of their relatives than their older counterparts. Finally, the multivariate interaction between sex and age was not significant, Wilks' λ = 0.99; F(5, 1063) = 0.77, p > 0.05, ηp 2 = 0.00. Daily Routines of Youths during COVID-19 Confinement by Sex and Age Regarding routines during the confinement, Table 2 displays descriptive statistics and mean comparisons based on participants' sex and age. 
In general, the results indicated that the most frequent activities were intellectual (e.g., studying and reading) and leisure (e.g., playing games, watching a series, and listening to music) activities. In contrast, the least frequent were creative activities (e.g., writing and handcrafting). Next, a MANOVA including sex and age as fixed factors and daily routines as dependent variables yielded significant multivariate effects of sex, Wilks' λ = 0.92; F(5, 1060) = 17.86, p ≤ 0.001, ηp 2 = 0.08, and age, Wilks' λ = 0.99; F(5, 1060) = 3.14, p = 0.008, ηp 2 = 0.02, on adolescents' daily routines during COVID-19 confinement. Subsequent univariate ANOVAs (see Table 2) indicated that girls and late adolescents were more likely to maintain daily routines during the confinement. Girls were more engaged in intellectual, creative, and sports activities on a daily basis. In addition, younger adolescents reported doing leisure activities with greater frequency than older adolescents. No multivariate interaction effect between sex and age was found, Wilks' λ = 0.99; F(5, 1060) = 1.03, p > 0.05, ηp 2 = 0.00.

Online Peer Activities Used to Help Youths Feel Supported by Friends during COVID-19 Confinement by Sex and Age

Table 3 provides an overview of descriptive statistics for online peer activities during the confinement by adolescent sex and age. As shown, participants engaged in online activities to maintain relationships with friends quite regularly. Among the most frequent were activities aimed at maintaining communication (i.e., through messaging on WhatsApp, Telegram, or others, or making calls or video calls) and having companionship (i.e., doing leisure activities simultaneously with friends). A MANOVA with sex and age as independent variables and online peer activities as dependent variables provided evidence of multivariate effects for both sex, Wilks' λ = 0.82; F(7, 1189) = 36.99, p ≤ 0.001, ηp 2 = 0.18, and age, Wilks' λ = 0.97; F(7, 1189) = 5.25, p ≤ 0.001, ηp 2 = 0.03. As shown in Table 3, separate univariate ANOVAs on the outcome variables revealed that girls used the internet more extensively than boys to maintain relationships with their friends. While boys played online games with peers more often than girls, girls shared more pictures or videos of themselves doing activities at home, used WhatsApp or Telegram more often to message their friends, did more challenges, and got involved in more simultaneous activities with their friends using the internet. Similarly, when comparing online activities across age groups, the results showed that the younger group of adolescents made more calls or video calls with their friends, were more engaged in challenges, and did more activities simultaneously with their peers. Finally, a multivariate interaction effect between sex and age was observed, Wilks' λ = 0.98; F(7, 1189) = 3.04, p = 0.004, ηp 2 = 0.02. Subsequent univariate analyses indicated that the interaction between sex and age was significant for the activities of sharing videos or memes, F(1, 1195) = 6.16, p = 0.013, ηp 2 = 0.01, making calls or video calls, F(1, 1195) = 4.50, p = 0.034, ηp 2 = 0.01, and playing online games, F(1, 1195) = 7.26, p = 0.007, ηp 2 = 0.01; with boys in the youngest age group doing these activities more frequently than boys in the older age group.
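As an illustration of the forward stepwise procedure described in the Plan of Analysis, here is a minimal Python sketch using statsmodels. The data frame and column names are hypothetical stand-ins for the study's variables, SPSS's exact entry criteria are simplified to a p-value threshold on the candidate predictor, and categorical predictors are assumed to be numerically coded (e.g., sex as 0/1); this is a sketch of the selection logic, not the authors' SPSS code.

import statsmodels.api as sm

def forward_stepwise(df, outcome, blocks, p_enter=0.05):
    # Greedy forward selection over blocks: within the candidates released
    # so far, repeatedly add the predictor with the largest R-squared
    # increment (equivalently, the largest squared semipartial correlation),
    # provided it is significant at p_enter. df is a pandas DataFrame.
    selected = []
    candidates = []
    for block in blocks:
        candidates.extend(block)
        while True:
            base = 0.0
            if selected:
                base = sm.OLS(df[outcome], sm.add_constant(df[selected])).fit().rsquared
            best, best_gain = None, 0.0
            for var in candidates:
                if var in selected:
                    continue
                fit = sm.OLS(df[outcome], sm.add_constant(df[selected + [var]])).fit()
                gain = fit.rsquared - base
                if fit.pvalues[var] < p_enter and gain > best_gain:
                    best, best_gain = var, gain
            if best is None:
                break
            selected.append(best)
    return selected

# Hypothetical usage mirroring the study's three blocks:
# blocks = [["sex", "age_group"],
#           ["concerns", "online_peer_activities", "optimism"],
#           ["routine", "physical", "intellectual", "leisure", "creative"]]
# predictors = forward_stepwise(data, "frustration", blocks)

At each step all model statistics are refit from scratch, matching the description above that every entry produces a new model; the iteration ends when no remaining candidate from any released block adds significant predictive power.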
The multiple regression (see Table 4) revealed that sex, optimism, COVID-19-related concerns, online peer activities, maintaining daily routines, and leisure activities contributed significantly to the regression model, F(6, 1052) = 31.80, p ≤ 0.001. Together, the six independent variables accounted for 15.4% of the variance in frustration. Being female, experiencing more concerns about the impacts of COVID-19, and doing online peer activities more frequently were positively related to frustration. In contrast, higher levels of optimism, maintaining daily routines, and doing more leisure activities were negatively associated with frustration. Regarding collinearity, tolerance values ranged from 0.88 to 0.92 and VIF values ranged from 1.08 to 1.13. These values indicate that the variables introduced in the model were not highly correlated and that there was no collinearity among the independent variables. As a follow-up, correlations between adolescents' frustration and specific aspects of COVID-19-related concerns and online peer activities were computed. In relation to COVID-19-related concerns, frustration was positively associated with concerns about their relatives getting COVID-19 (r = 0.08, p = 0.010), their family financial strain, currently (r = 0.11, p ≤ 0.001) and in the future (r = 0.13, p ≤ 0.001), as well as about their own education (r = 0.20, p ≤ 0.001). Regarding online peer activities, positive, although small, correlations were found between frustration and sharing personal pictures or videos (r = 0.07, p = 0.017), messaging friends on WhatsApp, Telegram, or others (r = 0.06, p = 0.038), and making calls or video calls (r = 0.08, p = 0.013).

Discussion

This paper aimed to provide insight into the concerns that young people experienced during the COVID-19 confinement in Spain as well as to explore their daily and online activities. Most previous studies have analyzed the psychological impact of COVID-19 in the general or university population, but there are still few studies focused on adolescents. In addition, considering that the progress of the pandemic is still uncertain, this study has also described the degree of frustration experienced by adolescents and tried to identify factors associated with it. The first aim of this study was to explore adolescents' concerns about COVID-19. Consistent with the hypotheses of this study, the findings showed that adolescents were most worried about the risk of their relatives getting sick and least worried about their own risk of infection. Previous studies have also shown this result [14]. From a developmental perspective, the lesser degree of concern that adolescents showed about their own health may be explained by the hypothesis of the "personal fable". Previous studies suggest that a characteristic of adolescent thinking is their propensity to regard themselves as invulnerable; that is, they think that problems and difficulties are not going to happen to them. This cognitive feature is associated with greater involvement in risky behaviors [38]. In times of pandemic, breaking social distancing guidelines or not following health and safety measures may be considered new risk behaviors. As such, if adolescents are less concerned about their own health, they may become more involved in irresponsible behaviors that put everyone's health at risk [39,40]. For this reason, future studies may want to analyze the effectiveness of prevention campaigns aimed at fostering social responsibility.
Perhaps personalizing the consequences of adolescents' risk behaviors on the health of their relatives (e.g., grandparents) may be a more effective way to achieve greater adherence to prevention measures. Beyond health concerns, the findings of this study also indicated that, consistent with research on university students [12], adolescents were very concerned about their studies. A review by Sahu [41] describes how some young people have had to adapt to online education with limited resources at home (e.g., teenagers have had to share computer equipment with different family members who need to telework and study). They have also faced an increase in the amount of schoolwork because of continuous evaluation, in addition to the uncertainty and pressure of highly monitored final evaluations. All of these factors may have been a source of stress for young people. Thus, these results were in line with the hypothesis of this study and underline that attending to adolescents' education during the crisis is also crucial. Additionally, the results have contributed to understanding the role of sex and age in adolescents' experienced concerns. Regardless of the issue, girls were found to be more worried than boys. For age, although in general there were few differences, younger people were more concerned about the health of others and their studies. These results are in line with previous studies with young people and adults, which have reported increased levels of worry, fear, anxiety, and depression among girls [14,17,32]. Moreover, contrary to our expectations, the younger groups of participants were more worried than older adolescents. A previous study among Italian adolescents between 13 and 20 years old [14] showed that older adolescents were moderately more concerned than younger adolescents. However, the present results are similar to those of previous studies with university and adult samples, which found higher levels of COVID-19 fear and concerns among younger participants [19,24]. Thus, the results highlight the relevance of understanding that the COVID-19 health crisis is not affecting all people equally, so it is necessary to redouble efforts to support the most vulnerable. The second aim of this study was to provide insights into the activities that young people have been participating in during the confinement, on a daily basis and with their peers. Findings suggested that adolescents have spent time on intellectual and leisure activities. In contrast, creative and physical activities were undertaken less frequently. In view of these findings, it is important to implement resources aimed at reducing sedentary behaviors, and sports activities should be promoted. In this regard, previous theoretical studies have warned that the possible increase in sedentary behavior, together with excessive time spent on technology, may have affected the sleep patterns of children and adolescents, with consequent risk to their development and health [42]. Regarding online activities with peers, this study has also yielded interesting results. Before the pandemic, the analysis of the use of new technologies in peer relations was undertaken in the knowledge that young people tend to alternate face-to-face interactions with online interactions. However, the pandemic situation and the lack of in-person contact have provided a unique opportunity to analyze the contribution of the online world to interpersonal relationships.
According to the expectations, the results of this study indicated that young people very often used new technologies to feel supported by their friends. The most frequent online activities were conversations via instant messaging applications and the use of new technologies to do activities simultaneously with peers. In contrast, challenges and online games were less frequently undertaken. Girls used the internet more often than boys, and younger people tried to see their friends more, even if it was through the screen, using challenges, or doing the same activities simultaneously with peers. All these interactions, which are related to spending time together, are traditionally known as companionship or intimacy [43]. In-person, higher levels of companionship are associated with better psychological and relational adjustment [44]. However, the role of these relationships on adolescents' wellbeing is less known when interactions are 100% online. The third aim of this study was to examine the factors that related to adolescents' experiences of frustration during the lockdown. Although frustration itself is not an indicator of mental health, prior research has shown that it is a significant predictor of adolescents' psychological problems [28]. Findings revealed that optimism, followed by sex, COVID-19-related worries, online peer activities, daily routines, and leisure activities were significantly related to higher levels of frustration. Concerning daily routines, the results showed that keeping daily routines and doing leisure activities were related to lower levels of frustration. Possibly, routines during the confinement have been important to foster a sense of normality, providing structure in the uncertain. In further detail, subsequent analyses indicated that adolescents' more salient concerns about COVID-19 were associated with higher levels of frustration. Specifically, the findings showed that youths who were more worried about their education, the financial situation of their family, or the health of their relatives also reported more frustration. Another important finding concerns the online activities used to keep youths connected with their peers during the confinement. Although correlations were modest, results indicate that youths who spent more time sharing personal pictures or videos, messaging friends, and making calls or video calls experienced greater levels of frustration. This finding might seem counterintuitive because previous literature has shown that intimacy and companionship with peers are related to better psychological adjustment [44]. However, our results may be far from what is expected because the interaction with peers, due to the confinement, has been restricted to the online context. A previous study found that the balance between online and offline communication matters [45]. For example, among university student couples, closeness increased when online and offline communication was balanced (i.e., couples communicated by both means). In contrast, intimacy and closeness decreased when communication was only online. Thus, these results suggested that the online context was useful for socialization, especially when face-to-face interactions were restricted, but that the internet alone could not compensate or replace face-to-face interaction. Finally, it is important to note that higher levels of optimism were significantly related to lower rates of frustration. This result is consistent with research on the stress buffering effect of optimism [46]. 
Literature on the psychological impact of COVID-19 has also evidenced that optimism mediates the relationship between COVID-19-related stress and psychological problems, and is associated with lower levels of depression or anxiety, among others [47,48]. Perhaps adolescents with a more optimistic perspective reappraised the situation in a more positive way [49]. Thus, although further studies with adolescents are needed, the results seem to indicate that keeping an optimistic perspective is important in mitigating the psychological impacts of COVID-19. Limitations Despite the strengths of this study, some limitations should be outlined. First, the sample was mostly composed of young people from the southern area of Spain. Although the aim of the study was not to make comparisons between different areas of the country, future studies could benefit from a stratified sampling. Second, adolescents' frustration was assessed through a single-item questionnaire. The main reason was that online surveys should be short to facilitate participation. However, future studies could evaluate this construct through a validated multi-item instrument. Third, the cross-sectional design of this study did not allow for an analysis of the directionality of the relationships between the variables. Future longitudinal studies could explore the temporal association between the factors associated with adolescents' frustration. Finally, given that data were collected during the strict confinement period in Spain, this study assumed that participants were not having any actual face-to-face contact with their peers and that all their social routines were online. However, as they were not explicitly asked about the extent to which they followed the strict isolation measures imposed, it remains possible that a few individuals in the study had had actual contact with their friends. Conclusions The pandemic has led to moderate-to-high levels of frustration among adolescents. The results of this study have allowed us to delve deeper into the role that concerns, daily activities, online interactions with friends, and optimism play in the degree of frustration that young people have experienced. First, matters related to family health, family financial situation, and education were found to be associated with more frustration. Among these, the academic concern was the most relevant variable because of the magnitude of the association with adolescents' frustration. In this sense, it is necessary to develop an educational response that involves the different stakeholders (politicians, teachers, families, and students) to ensure that we take care of both students' physical and mental health. To this end, it is necessary to rethink the changes that should be implemented in order to prepare schools for a new COVID-19 outbreak. Second, findings have allowed us to make conclusions about the importance of adolescents maintaining a daily routine, and participating in physical, leisure, and creative activities. Special attention should be paid to creative activities because, despite being negatively related to frustration, they were the least frequent activity undertaken during confinement. Third, another relevant conclusion of this study was that optimism turned out to be the variable that showed a stronger negative association with adolescents' frustration. Thus, it seems that being able to reinterpret the situation positively represents a precious resource for adolescents in facing this health crisis. 
In summary, this work contributes to understanding the emotional impact that the COVID-19 crisis has on Spanish adolescents, exploring not only the factors that are related to more significant psychological effects but also to some variables that are associated with greater resilience.
2021-02-20T14:01:58.398Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "49141d6a8d635efe392576184568dd5e727eb7f5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/10/4/798/pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "003bcf619d773b064575fd7540e71f6eeb021cd7", "s2fieldsofstudy": [ "Sociology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
237534326
pes2o/s2orc
v3-fos-license
Comparison between Kimura's disease and angiolymphoid hyperplasia with eosinophilia: case reports and literature review

Kimura's disease (KD) is a rare chronic inflammatory or allergic disease. Angiolymphoid hyperplasia with eosinophilia (ALHE) is a benign vascular neoplasm. Their relationship has always been debated. This article reports two rare cases, one of each disease. One patient was a 48-year-old female who presented with a mass on her right mandible; she also had oedematous erythema and wheals on her lower limbs. She was diagnosed with Kimura's disease complicated with chronic urticaria. The second patient was a 23-year-old female who presented with multiple nodules of unequal size on the scalp. She was diagnosed with angiolymphoid hyperplasia with eosinophilia. The first patient recovered after being treated with surgical resection, glucocorticosteroids, cyclophosphamide and radiotherapy. The second patient underwent the first stage of surgical excision and is currently being followed up. Comparison of the clinical and histopathological features of these two cases supports the theory that KD and ALHE are two separate disease entities.

Introduction

Kimura's disease (KD), also called eosinophilic hyperplastic lymphoid granuloma, is a rare chronic inflammatory disorder.1 Angiolymphoid hyperplasia with eosinophilia (ALHE), also called epithelioid haemangioma, is a rare angioproliferative disease.2 Their relationship has always been debated. These two conditions were first thought to be different stages of a spectrum of one disease due to some of their similarities.3 With further understanding, scholars have found many differences between these two diseases, and they are now accepted to be two distinct histopathological entities.4,5 The current case report describes two patients, one with KD complicated with chronic urticaria and the other with ALHE. This work clarifies the similarities and differences between these two diseases by comparing their clinical and histopathological features.

Case 1

In July 2018, a 48-year-old Chinese female presented to the Department of Dermatology, Huangshi Central Hospital, Affiliated Hospital of Hubei Polytechnic University, Edong Health Care Group, Huangshi, Hubei Province, China with a progressive mass on her right mandible of 3 months' duration, accompanied by oedematous erythema and wheals mainly on the lower limbs. She had repeatedly seen dermatologists for her rash but had not noticed the mass. She reported no fever, night sweats or weight loss. She had no other medical conditions or respiratory symptoms and was otherwise systemically healthy. There was no relevant genetic history in her family. Upon physical examination, a nontender, non-erythematous palpable mass with no clear boundary was found in the right submandibular region (Figure 1a). Several enlarged lymph nodes were also observed in the right neck and supraclavicular areas. Wheals were scattered across the trunk and lower limbs, especially the lower limbs (Figure 1b). An enhanced computed tomography (CT) scan of the nasopharynx showed a soft tissue mass (1.9 cm × 1.1 cm) in the lateral part of the right submandibular gland (Figure 1c). Chest CT imaging revealed small lymph nodes in the supraclavicular region.
Figure 1. Clinical features of case 1, a 48-year-old Chinese female that presented with a progressive mass on her right mandible that was accompanied by oedema, erythema and wheals mainly on the lower limbs: (a) a palpable mass, which was nontender and non-erythematous, was observed in the right submandibular region; (b) wheals were scattered on her lower limbs; (c) enhanced computed tomography imaging of her nasopharynx showed a soft tissue mass (1.9 cm × 1.1 cm) in the lateral part of the right submandibular gland.
After surgical resection, the patient underwent four courses of treatment that included the following: 500 mg/day cyclophosphamide intravenously on day 1, then 100 mg/day prednisone orally for 4 days. The patient then received local radiotherapy (PGTVnd 42.4 Gy/20 F) for 14 days after 2 days of drug discontinuation. Each course of treatment lasted 21 days. The patient was followed up for 6 months without relapse. Case 2 In March 2021, a 23-year-old Chinese female presented to the Department of Dermatology, Huangshi Central Hospital with multiple nodules on the scalp of 2 years' duration, without pain or itching. She denied any history of head trauma and had no other medical conditions or relevant genetic history in her family. Upon physical examination, numerous skin-coloured nodules measuring approximately 0.5-3 cm (Figure 3a) were found on the occipital area (Figure 3b). No enlarged superficial lymph nodes were observed. Blood count and hepatic and renal function were normal. A biopsy of the lesion showed many irregular hyperplastic vessels with hypertrophic endothelium in the dermis, and endothelial cells protruded into the lumen and assumed a spiked appearance (Figure 4a). The vessels were surrounded by dense infiltrates of lymphocytes and a small number of eosinophils (Figure 4b). The patient was diagnosed with ALHE. Owing to the presence of numerous lesions, staged surgical excision of the lesions was recommended. The patient has already undergone the first stage of excision and will receive the second stage of excision after 3 months. She is now being followed up. This report was approved by the Ethics Committee of Huangshi Central Hospital, Edong Health Care Group (V1.0.2021.03.15). Specific information on the patients was completely de-identified in the manuscript. Written informed consent was obtained from the patients for the publication of this report. The reporting of these two case reports follows the CARE guidelines. 6 Discussion Kimura's disease was first described by Kimm and Szeto in China in 1937 and was later studied by Kimura and Ishikawa in 1948. 7 Since then, the disease has become widely known as Kimura's disease.
Figure 4. Histopathological features of case 2, a 23-year-old Chinese female that presented with multiple nodules on the scalp without pain or itching: (a) many irregular hyperplastic vessels with hypertrophic endothelium in the dermis; endothelial cells protruded into the lumen, assuming a spiked appearance (haematoxylin and eosin, original magnification × 100); (b) the vessels were surrounded by dense infiltrates of lymphocytes and a small number of eosinophils (haematoxylin and eosin, original magnification × 400). The colour version of this figure is available at: http://imr.sagepub.com.
ALHE was first described in 1969, when it was thought that the two diseases represented different stages of the same disease spectrum, with KD being the late stage of ALHE. 3 In 1987, KD and ALHE were found to differ in terms of histopathological features.
4 Many scholars subsequently supported this view. 5,8,9 Currently, KD is considered to be a rare chronic inflammatory or allergic disease and ALHE is thought to be a benign vascular neoplasm. 8 Kimura's disease is an inflammatory disorder of unknown aetiology that most commonly presents as painless lymphadenopathy or subcutaneous masses in the head and neck region. 10 Patients are typically males of Asian descent in their 30s and the sex ratio is 4-7:1. 11 In the clinical setting, one or multiple painless, deep subcutaneous nodules are usually observed, mostly in the head and neck and mainly in the subcutaneous tissues, salivary glands or cervical lymph nodes. 12 Associated lymphadenopathy has been reported in 42-100% of cases. 13 Blood analysis shows hypereosinophilia and elevated total immunoglobulin E. 11 The most important pathological feature of KD is hyperplastic follicles with germinal centres surrounded by abundant eosinophilic infiltrations. 1,13 There is currently no standard treatment for KD. The main therapeutic methods are surgical resection, glucocorticoid therapy, cytotoxic therapy and radiotherapy. 1,7 Owing to its frequent recurrence, comprehensive treatment is often required. 1,14 Angiolymphoid hyperplasia with eosinophilia is a rare benign vasoproliferative disease of uncertain pathogenesis that frequently presents with single or multiple dermal papules or nodules on the head and neck. 15 A high incidence has been reported in female patients aged 30-50 years. 16 The histopathology of ALHE shows prominent vascular proliferation, enlarged endothelial cells that are cuboidal to dome-shaped, and sparse to heavy lymphocytic infiltrates with eosinophils. 17 The treatment of choice for ALHE is surgical excision, although relapse is common. 17 Other types of procedures, such as cryotherapy, 18 pulsed dye laser, 19 and carbon dioxide laser, 20 have been reported. Medical treatments include corticosteroids, topical imiquimod, tacrolimus and isotretinoin. 17 The clinical and histopathological features of KD and ALHE have been widely compared and the results are presented in Table 1. 5,8,9 A comparison between KD and ALHE for the two patients in this work is summarized in Table 2. Many differences in the clinical and histopathological features are noted between KD and ALHE. In the present two cases, some distinctions from previously reported characteristics were noted. Case 1 with KD was a female aged 48 years, which is beyond the typical age range of 20-30 years. 11 More interestingly, she had a complication of chronic urticaria. To date, no reports have been published about KD complicated with chronic urticaria. This finding may also serve as evidence that Kimura's disease is a chronic inflammatory or allergic disease. Case 2 with ALHE was a female aged 23 years, which is younger than the typical age range of 30-50 years. 16 More than 50 nodules measuring approximately 0.5-3 cm were observed. This result adds further evidence that ALHE is a benign neoplasm. In conclusion, comparison of the clinical and histopathological features between KD and ALHE demonstrated their broad differences. Thus, this work supports the opinion that KD and ALHE are two separate disease entities.
Author contributions Ailing Zou: approval of the final version of the manuscript; drafting and editing of the manuscript; collection of data; critical review of the literature; Mengyao Hu: approval of the final version of the manuscript; collection of data; critical review of the literature; critical review of the manuscript; Bin Niu: approval of the final version of the manuscript; drafting and editing of the manuscript; intellectual participation in propaedeutic and/or therapeutic conduct of the cases studied; critical review of the manuscript.
Phenomenological Research Needs to be Renewed: Time to Integrate Enactivism as a Flexible Resource Qualitative research approaches under the umbrella of phenomenology are becoming overly prescriptive and dogmatic (e.g., excessive and unnecessary focus on the epoché and reduction). There is a need for phenomenology (as a qualitative research approach) to be renewed and refreshed with opportunities for methodological flexibility. In this process paper, we offer one way this could be achieved. We provide an overview of the emerging paradigm of post-cognitivism and the aligned movement of enactivism, which has roots in phenomenology and embodied cognition. We argue that enactivism can be used as a flexible resource by qualitative researchers exploring the unfolding of first-person (subjective) experience and its meanings (i.e., the enactive concept of sense-making). Enactive approaches are commonly tethered to "E-based" theory, such as the idea that sense-making is a 5E process (Embodied, Embedded, Enacted, Emotive, and Extended). We suggest that enactivism and E-based theory can inform phenomenological research in eclectic and non-prescriptive ways, including integration with existing methods such as observation/interviews and thematic analysis with hybrid deductive-inductive coding. Enactivism-informed phenomenological research moves beyond methodological individualism and can inform novel qualitative research exploring the complex, dynamic, and context-sensitive nature of sense-making. We draw from our enactive study that explored the co-construction of pain-related meanings between clinicians and patients, while also offering other ways that enactive theory could be applied. We provide a sample interview guide and codebook, as well as key components of rigor to consider when designing, conducting, and reporting a trustworthy phenomenological study using enactive theory. Introduction The enactive approach (now commonly referred to as enactivism) is a theory of cognition that is grounded in the philosophical movement of phenomenology as well as the cognitive sciences (Varela et al., 1991). Enactivist frameworks typically view cognition as sense-making, referring to personal significance or meaning that a person generates or "enacts" by interacting in their environment. Since the 1990s, many strands of enactivism have developed and are now rapidly gaining popularity among philosophers and researchers studying sense-making (for an introduction to some varieties of enactivism, see Ward et al., 2017). However, as recently highlighted (Fernandez, 2020; Zahavi & Martiny, 2019), many enactive concepts are rarely used in qualitative research despite their potential to offer novel insight into complex clinical phenomena, including the evolving sense-making of people living with challenging health condition(s) as they engage in healthcare. Further, existing qualitative research approaches under the umbrella of phenomenology are becoming overly prescriptive and dogmatic. Studies deviating from the approaches outlined by prominent qualitative researchers are being criticized and denied their "phenomenological" status (e.g., see Smith [2018] and van Manen [2017]). Philosophers have also joined these debates; see Zahavi's (2019c) commentary regarding issues with van Manen's interpretations of phenomenology and their negative impact on qualitative research. Also see Zahavi's (2019a) commentary regarding Giorgi's questionable insistence on the use of the epoché and reduction.
To us, these debates indicate a need for phenomenology (in the form of a qualitative methodology) to be renewed and refreshed, moving away from the overly complicated steps and dogma that are impeding the practical relevance of phenomenological qualitative research. This process paper offers a way to move in this direction. We start by outlining why qualitative research is vital to better understand the experiences of individuals living with complex, subjective health conditions. While doing this, we explore the concept of "subjectivity" as it is a central concept in enactive approaches to sense-making. We then provide an overview of the emerging paradigm of post-cognitivism and how it contrasts with traditional ways of understanding cognition (sense-making). Enactivism sits within the post-cognitivist paradigm and we outline the philosophical assumptions that generally come with enactive approaches to sense-making. We then argue that enactivism can be used as a flexible resource by qualitative researchers exploring the unfolding of first-person experience and its meanings (i.e., the enactive concept of sense-making). We draw from our own enactive work (Stilwell, 2020; Stilwell et al., 2020; Stilwell & Harman, 2019) that explored the co-construction of pain-related meanings between clinicians and patients, while also offering other ways that enactive theory could be applied in phenomenological research. Qualitative Health Research and Subjectivity When considering the top ten leading causes of years lived with disability, five are conditions with strong subjective elements (i.e., they are embodied, first-person experiences involving sense-making): low back pain #1, migraine #2, major depression #5, neck pain #6, and anxiety #9 (Hay et al., 2017). These subjective conditions contrast with conditions, such as iron-deficiency anemia, that are diagnosed through laboratory investigation (e.g., blood hemoglobin value) rather than a patient's report of their experience. Qualitative research not only explores individuals' experiences, it also effectively identifies common concerns, preferences, and expectations that patients have about potential or received treatment. This can provide clinicians with an enhanced understanding of these factors, leading to healthcare that improves patients' experiences and outcomes. For example, qualitative research about persistent pain has found that patients feel dismissed and stigmatized, and has offered suggestions about how care could be optimized (Bunzli et al., 2013; Holloway et al., 2007; Slade et al., 2009). Ultimately, qualitative research can inform humanistic approaches to the care of those who are suffering. Qualitative research on subjective conditions is essential due to epistemological constraints related to assessment in healthcare. People's subjective experiences, such as pain, are from a first-person perspective; these experiences cannot be directly assessed or "seen" through scientific measurement or testing (Stilwell & Harman, 2019; Wideman et al., 2019). Therefore, the person with the experience of interest has an epistemic privilege; their qualitative narrative is the best available proxy for others to infer that they are experiencing the subjective condition of interest, such as pain (Stilwell & Harman, 2019; Wideman et al., 2019). In this sense, first-person (subjective) experience is private. Of clinical relevance, a clinician cannot deny that a person is in pain (for example) when they say they are experiencing it.
Nor can a clinician say that a patient is feeling pain when they deny it. At this point, it is important to unravel the word "subjective" as it is used in different ways and often creates confusion in interdisciplinary work. As outlined by de Haan (2020b, p. 85): "Experiences are of a 'subjective' structure . . . (they) are not subjective as opposed to objective, they are subjective as opposed to being views from nowhere." Further, as noted by Gallagher and Zahavi (2012, p. 21), some mistake phenomenology to be a subjective account of experience; however, they note that " . . . a subjective account of experience should be distinguished from an account of subjective experience." To try to avoid confusion, we align with the terminology commonly used by enactivist thinkers (Fuchs, 2020; Thompson, 2005). For current purposes, the most relevant terminology is the lived body (body as subject) and living body (body as object). In this sense, the body has a double status of being a "subject-object": a subjectively lived body and an objective living body (see Fuchs, 2020; Thompson, 2005). The lived and living body are part of the single person in an environment; therefore, as outlined by de Haan (2020a, 2020b), the lived body (subjective experience) cannot be reduced to the living body (physiological processes). Also, physiological processes do not simply "add up" to the lived body, nor can we fully "map" bodily processes (identified through third-person methods) onto the experientially lived body (de Haan, 2020a, 2020b). These ideas are reflected and further unraveled in Figure 1 and toward the end of the paper. As reflected in Figure 1, we use some terms interchangeably, such as lived body and subjective experience; likewise, living body, body as object, and references to physiological or biological processes. Now, Figure 1 can be considered in the context of pain as an example of a common subjective experience. Although there are pain-related physiological measures (e.g., quantitative biomarkers) and people often behave in certain ways when they are experiencing pain (e.g., facial expressions and bodily movement patterns), the experience of pain itself (i.e., the lived body) cannot be found through third-person biomedical investigations of the living body. Therefore, for people who are conscious and have the capacity to communicate, we rely on their qualitative narrative to gain insight into their experience of pain and how their sense-making changes as they navigate health services. Phenomenological qualitative research is well positioned to explore sense-making, which can inform ways to improve patients' experiences and outcomes. However, there is a need for methodological flexibility in phenomenological research to open up opportunities to conduct clinically meaningful research without having to follow overly complicated, confusing, and likely unnecessary procedures. In this paper we suggest that phenomenological research can be conducted in non-traditional ways, moving beyond individual interviews and a narrow view of sense-making that only focuses on what it is like to have an experience. We argue that enactivism offers new theoretical considerations and may help researchers better explore the complex nature of sense-making, while still being attuned to valuable concepts often found in phenomenological research such as embodiment, embeddedness, intersubjectivity, spatiality, relationality, and temporality.
Post-Cognitivist Paradigm Qualitative researchers are expected to consider and report their philosophical paradigm (world view), methodology (sometimes called qualitative design), and methods (Creswell, 2013, 2014). Here we take time to outline post-cognitivism as enactive approaches sit within this paradigm and these ideas are not yet reflected in texts on qualitative methods. The post-cognitivist paradigm is rapidly evolving and starting to take a coherent shape as authors declare the various philosophical assumptions it entails, separating it from some existing paradigms and merging it with others. To understand what has been referred to as the post-cognitivist paradigm (Lobo, 2019), we contrast it with cognitivism. A key feature of the traditional or classical cognitivist paradigm is that the mind/cognition should be understood through third-person analyses of the brain, downplaying the role of the body and context (Thompson, 2007). This contrasts with the post-cognitivist paradigm that emphasizes the importance of the full living body, context, interaction in the environment, and first-person experience (lived body). In the post-cognitivist paradigm, cognition is broadly understood as sense-making that brings forth (enacts) experience/meaning from a concerned point of view. More specifically, Engel (2010) outlined core assumptions of cognitivism and post-cognitivism in relation to cognition. These assumptions are summarized in Table 1.
Table 1. Core assumptions of classical cognitivism and post-cognitivism (summarized from Engel, 2010).
Classical cognitivism: Cognition is understood as computation over mental (or neural) representations. Post-cognitivism: Cognition is understood as the capacity of enacting a world.
Classical cognitivism: The subject of cognition is not engaged in the world, but conceived as a detached "neutral" observer. Post-cognitivism: The subject of cognition is an agent immersed in the world (as suggested by the phenomenological concept of being-in-the-world).
Classical cognitivism: Intentionality is explained by the representational nature of mental states. Post-cognitivism: System states acquire meaning by their relevance in the context of action.
Classical cognitivism: The processing architecture of cognitive systems is conceived as being largely modular and context-invariant. Post-cognitivism: The architecture of cognitive systems is conceived as being highly dynamic, context-sensitive, and captured best by holistic approaches.
Classical cognitivism: Computations are thought to occur in a substrate-neutral manner. Post-cognitivism: The functioning of cognitive systems is thought to be inseparable from its substrate or incarnation (embodiment).
Classical cognitivism: Explanatory strategies typically make reference to the inner states of individual cognitive systems.
Post-cognitivism builds on many lines of work, especially phenomenological philosophy, as is apparent in the terminology in Table 1 (e.g., being-in-the-world, embodiment, and situatedness). Further, Engel referred to the divergence from cognitivism as the pragmatic turn, making reference to the action-oriented viewpoints of those who developed pragmatism. However, these same assumptions apply to what is now being referred to as the post-cognitivist paradigm (Lobo, 2019); therefore, we use this label in Table 1. That said, we do appreciate that post-cognitivism encompasses aspects of pragmatism (see Gallagher, 2017). We also appreciate overlap between the post-cognitivist paradigm and constructivism. It is important to note that working in the post-cognitivist paradigm does not negate or remove the role of sub-personal systems or mechanisms (Lobo, 2019). Instead, there is an attempt to take into account the role of the brain, the body, and the environment to generate a big-picture view of sense-making (cognition) that is richer than the cognitivist view that the brain (mind) is essentially a data-processing computer (Lobo, 2019). In other words, post-cognitivists argue that sense-making cannot be fully understood by only looking in the brain (centralist approach) or other tissues in the body (peripheralist approach). Rather, a more comprehensive approach is required to appreciate how a person (with a body and brain) interacts with their environment in a particular situational context.
Figure 1. Adapted from Fuchs (2011, 2020) with inspiration from de Haan (2020a, 2020b). Note. The lived body (subjective experience) is non-reducible; it cannot be reduced to the living body (physiological processes) that can be investigated from a third-person perspective. Yet, as indicated by the arrows, there is influence (circular or organizational causality) between the lived and living body (we discuss this further, including integration of sociocultural influences, later in the paper; also see de Haan, 2020b). While the living body (by itself) is not the unit of analysis for experience, it certainly limits or allows the types of experiences an individual can have. *It is important to note that the experience of others can only be inferred from a second-person perspective through interacting with them and witnessing their behaviors or verbal reports.
While evolution, genetics, and bodily pathology certainly affect and set limits to the types of experiences humans have, in the post-cognitivist paradigm the first-person experience (i.e., subjective, lived body) cannot be reduced to a bodily process (e.g., objective, central or peripheral physiological processes) abstracted from the environment, context, and meaning. Throughout the rest of the paper, we include commentary on decision-making and examples from our exploratory enactive qualitative study of pain-related meanings (see chapter three in Stilwell, 2020) and our related work (Stilwell & Harman, 2019). Working within the paradigm of post-cognitivism, we now outline the decision to integrate enactivism into our research, followed by the specific assumptions of enactivism. From Phenomenology Frustrations to Finding Enactivism To contextualize the decision to integrate enactivism into our qualitative research, we provide some background details. Early in his doctoral studies, the first author felt constrained by available qualitative research methodologies (i.e., narrative research, grounded theory, ethnography, case study, phenomenology) when considering the following: his alignment with what is now being referred to as the post-cognitivist paradigm; his specific assumptions about pain; and his desire to study patient-clinician interaction and the co-construction of pain-related meanings. The first author attempted to design a phenomenological study, but struggled when it came to making a decision whether to align with descriptive phenomenology (Husserl) or interpretive (hermeneutical) phenomenology (Heidegger and Gadamer). To better understand phenomenological concepts (e.g., epoché, bracketing, and the reduction) and connect phenomenology as a philosophy to phenomenology as a qualitative research approach, he began reviewing the work of van Manen, who is highly cited among qualitative researchers. He began to note contradictory and confusing advice and felt uncomfortable with van Manen's unnecessarily complicated procedures and strong views as to what phenomenological research should entail. He felt interpretive phenomenology was the closest methodology aligning with his paradigm and research questions, yet it was missing key elements of interest that he wanted to apply to pain (e.g., contemporary aspects of embodied cognition and enactivism, described in more detail shortly). Further, he had concerns because aspects of his desired qualitative research endeavors were far from what is considered "proper" phenomenology according to prominent authors, such as van Manen and Giorgi, as reflected in Zahavi's recent commentaries (Zahavi, 2019a, 2019c). An exploration of the enactive literature led the first author to use enactive theory to guide his qualitative research as it is rooted in phenomenology and contained elements of interest that were not apparent in other phenomenological approaches. This was an unusual strategy as there was no clear alignment with well-known phenomenological qualitative research approaches.
As we only briefly introduced enactivism in the introduction, in the next section we provide some historical information and the theoretical assumptions of enactivism that guided our work. Enactivism The connections between enactivism and phenomenology are apparent in the literature, making enactivism well positioned as a resource that phenomenological qualitative researchers can draw from. As pointed out by Thompson, Varela first thought of the name "the enactive approach" in the summer of 1986 when he started writing The Embodied Mind. This book (Varela et al., 1991) is credited as introducing the enactive approach, now commonly referred to as enactivism. Yet, before introducing the term "enactive," Thompson noted that " . . . Varela had been using "the hermeneutic approach" to emphasize the affiliation of his ideas to the philosophical school of hermeneutics-an affiliation also emphasized by other theorists of embodied cognition at the time (see Varela et al., 1991, pp. 149-150)." (Thompson, 2005, p. 423). Enactive ideas have since been applied to a wide range of topics, including memory (Peeters & Segundo-Ortin, 2019), placebo effects (Ongaro & Ward, 2017), autism (De Jaegher, 2013), and clinical reasoning in both physiotherapy (Øberg et al., 2015) and psychiatry (de Haan, 2020a). Further, there are now many published books dedicated to advancing enactivism in different ways (Durt et al., 2017; Gallagher, 2017; Stewart et al., 2010), with some attending to more radical ideas than others (Hutto & Myin, 2013). Despite its roots in phenomenology, qualitative researchers rarely utilize the rich literature base of enactivism. Although enactivism is relatively new and still evolving as a movement, we suggest it is time for increased integration of enactivism into phenomenological research. Enactivism builds on and extends phenomenological considerations regarding the mind/cognition and has potential for increased methodological flexibility as compared to existing phenomenological qualitative approaches. Others (Di Paolo & De Jaegher, 2019) have also noted limitations when taking a purely phenomenological perspective (especially Husserl's descriptive phenomenology) and how phenomenology can be built upon by using enactive theory. With an enactive perspective, experience and meaning are not to be found in elements belonging to the environment/clinician or the internal dynamics of the person alone; instead, they belong to the relational domain established between the two (De Jaegher & Di Paolo, 2007).
As outlined by Gallagher (2017, p. 6), enactivist approaches to sense-making/cognition can be characterized by the background assumptions outlined in Table 2.
Table 2. Enactivist background assumptions (from Gallagher, 2017, p. 6).
1. Cognition is not simply a brain event. It emerges from processes distributed across brain-body-environment. The mind is embodied; from a first-person perspective embodiment is equivalent to the phenomenological concept of the lived body. From a third-person perspective the organism-environment is taken as the explanatory unit.
2. The world (meaning, intentionality) is not pre-given or predefined, but is structured by cognition and action.
3. Cognitive processes acquire meaning in part by their role in the context of action, rather than through a representational mapping or replicated internal model of the world.
4. Enactivist approaches have strong links to dynamical systems theory, emphasizing the relevance of dynamical coupling and coordination across brain-body-environment.
5. In contrast to classic cognitive science, which is often characterized by methodological individualism with a focus on internal mechanisms, enactivist approaches emphasize the extended, intersubjective, and socially situated nature of cognitive systems.
6. Enactivism aims to ground higher and more complex cognitive functions not only in sensorimotor coordination, but also in affective and autonomic aspects of the full body.
7. Higher-order cognitive functions, such as reflective thinking or deliberation, are exercises of skillful know-how and are usually coupled with situated and embodied actions.
Regarding Point Number 2 in the Table, to be clear, this is not appealing to subjective idealism; rather, enactive thinkers view the "world" in the sense of the meaningful experience that is always about or directed toward something: the world or umwelt that presents itself to each individual thanks to their sensorimotor repertoire (Thompson, 2007). Enactive research questions are along the lines of: why does something mean something, for someone, in a particular historical and interactive situation; and what is at stake for this person? (Di Paolo et al., 2018). However, it is important to acknowledge that there is still debate as to how enactivism relates to research (i.e., is it a philosophy, paradigm, research program, or methodology?) (Gallagher, 2017). We suggest that enactivism can be used as a flexible resource. We used it as a way to conceptualize pain (Stilwell & Harman, 2019) and we used that work to help shape and test our exploratory enactivism-informed phenomenological study on pain/meaning (Stilwell, 2020). In the next section, we provide more details on the enactive theory that informed our phenomenological study. Sense-Making as an E-Based Process Inspired by enactivism (including "4E" cognition; see Newen et al., 2018), we proposed an enactive approach to pain, in that pain is a mode of sense-making and that sense-making is a 5E process: Embodied, Embedded, Enactive, Emotive, and Extended (Stilwell & Harman, 2019). The 5E process of sense-making is depicted in Figure 2, where the Es are interconnected and together constitute sense-making. We have described each of the Es in detail elsewhere (Stilwell, 2020; Stilwell et al., 2020; Stilwell & Harman, 2019); however, we provide a brief summary here. In general, embodied means that sense-making is only possible by having a body and that different modes of sense-making are shaped by bodily processes and interactions.
Embodiment includes both the lived and living body, as reflected earlier in Figure 1, and encompasses well-known phenomenological concepts including spatiality, relationality, and temporality. Embedded means that an embodied person is always in and of an environment and that sense-making is shaped by a person's relationship and interactions with their physical and sociocultural environment. Enactive means that embodied, embedded people have a concerned point of view and are action-oriented; sense-making is shaped by possibilities for action and action-perception cycles. In other words, we perceive through bodily action or in terms of "what we can do." Enactivists commonly explain this theory of perception using the concept of affordances (Chemero, 2003; Gibson, 1977), which are possibilities for action shaped by the relation between a person and their environment. Emotive means that emotion/affect shapes or "colors" sense-making, directing us to salient aspects of ourselves and our environment. Extended means that non-biological items and engagement with large-scale institutions (e.g., cultural, academic, scientific, legal, etc.) can be a part of or shape sense-making (see Gallagher, 2018b). Typically, enactivism is tethered to multiple "Es" and people often talk in terms of E-cognition, E-based theory, or E-approaches. Hutto and Abrahamson (forthcoming) use an effective analogy, suggesting that E-approaches are like a family whose members sometimes do not get along. For example, some reject the idea that sense-making/cognition is extended but accept the other Es. These practices have led to different mixes of the Es in the form of 3E and 4E approaches. However, we view each of the 5 Es presented above as building on each other, connected and interpreted through an enactive lens. In the context of pain, our consideration of the 5 Es together is referred to as the enactive approach to pain (Stilwell & Harman, 2019). In other words, the enactive approach to pain represents the 5E family, with enactivism at the core. Just as we (Stilwell & Harman, 2019) advocate against looking at a single factor (e.g., just the brain or a body part) to explain experience, we believe enactivism-informed qualitative research should avoid doing so. For example, when conducting research in the context of healthcare for conditions with strong subjective elements, we need to look at the individual AND the environment, including the broader context (e.g., talk to both the patient and clinician, review clinical and laboratory findings, etc.). As done in our pain study (which we will discuss in more detail shortly), we suggest that the unit of analysis in enactivism-informed phenomenological research is (at least) the individual with a particular experience, with serious consideration of their context, including how each of the Es intertwines and shapes that person's sense-making. We expand upon this in the following section, offering guidance as to how enactivism can inform qualitative methods. We also bring it together at the end of this paper in Table 3. We feel that enactivism/5E theory can be applied to many experiences and processes that researchers are interested in, not just pain as was the case in our research. Enactive Methods: Observation/Interviews and Thematic Analysis To best integrate enactive theory into phenomenological qualitative research, we suggest that a combination of observation and semi-structured interviews provides rich, mutually enlightening data.
More specifically, we advocate for observation of real-time, real-life interactions between the person(s) with the experience of interest and their environment, including others who may act as scaffolding for the experience (e.g., a healthcare provider or others playing an important role in an individual's sense-making). Also, to explore the extended aspect of sense-making, we suggest exploring engagement with artifacts or what some call material actants (Ellingson, 2017; e.g., medical equipment, medications, assistive devices, tools, etc.). Here, it is important to note that while meaning can be generated in person-person interaction in a specific context (e.g., clinician-patient interaction in a clinical setting), we must appreciate that individual meanings are qualitatively different. We discuss this further in the following sections. Sampling When integrating enactivism into phenomenological research, we suggest that a wide range of purposive sampling strategies can be used. Depending on the research question and population of interest, specific cases may be sought out where one or a group of individuals have specific experiences and characteristics. Alternatively, maximum variation sampling (Palinkas et al., 2015) may be used to explore common features of an experience (e.g., pain) across a group with varied characteristics. Sample size will also vary; a priori estimates and rationalizing "saturation" are difficult; therefore, we suggest consideration of information power (Malterud et al., 2016). With this approach, the duration of observation and number of interviews will depend on the aim of the study, sample specificity, use of established theory, quality of dialogue/observation, and analysis strategy. Similar to traditional phenomenological qualitative research, a smaller sample size is expected relative to other approaches (e.g., a study using grounded theory). Data Collection When conducting enactivism-informed phenomenological research, we believe that both observation and interviews are important as together they offer a way to investigate interaction and intersubjectivity between one or more people in a specific context (including embodied-enactive interaction with artifacts). However, the data collection approach may vary depending on the research question, participants, and the environment of interest. We also encourage researchers to take field notes, which can be reviewed and incorporated into the analysis (described shortly). In our pain study (Stilwell, 2020), we audio-recorded clinical appointments between clinicians and their patients with low back pain, then interviewed each (clinician, patient) to explore individual perspectives, their thoughts about their interactions during the recorded appointment, and their past experiences with other clinicians and patients. This data collection approach allowed us to explore a range of interactions and situations, from relationship formation and breakdowns to relationship repairs and advances, each situation and context shaping patients' unique meanings and phenomenal experience. Similar to narrative research, we were able to piece together and explore patients' evolving narratives and sense-making while also taking a phenomenological approach by asking what these experiences were like and exploring generated meanings.
This is one example of how enactive theory guided us toward taking an eclectic approach to explore the complexities and multifaceted nature of sense-making, moving beyond only focusing on what it is like to have a certain experience. Context-based, 5E interview questions stray from traditional phenomenological lines of questioning, especially those that follow Husserl-based, descriptive approaches focusing solely on the invariant structures of experience. Instead, we suggest that enactivism-informed interview questions have more in common with interpretive phenomenology, which emphasizes the importance of context and how we cannot simply study a phenomenon that is removed from background information (Conroy, 2003; Laverty, 2003). However, as we outline below, we do suggest drawing from Høffding and Martiny (2016), who have both enactive and Husserlian influences. Høffding and Martiny (2016) argue that subjectivity cannot be reduced to objectivity; we cannot dismiss or discredit subjective experience with second- or third-person data. Yet, different types of data can mutually inform each other (we come back to this point later in the paper). Further, Høffding and Martiny (2016) note that an exploration of subjectivity directly confronts us with the embodied, enactive, and embedded aspects of experience. Researchers may not always need to explicitly ask about each of the 5 Es to be given information that is 5E-rich (e.g., a question not directly asking about emotion may elicit a narrative about and imbued with emotion). We discuss further inspiration from Høffding and Martiny (2016) in the next section (Making Data Collection Fully Enactive). For those looking for a starting point when developing an enactive/5E-based interview guide, in the Supplemental Material we provide some sample pain-related interview questions directed at patients; these questions were informed by the E-based literature, Høffding and Martiny (2016), as well as phenomenological interview tips provided by Gallagher and Francesconi (2012). These questions can be adapted for other subjective conditions and contexts. That said, we also encourage researchers to draw from the enactive literature to generate their own questions. This includes drawing from theory that is rarely touched on in qualitative research, such as the extended mind thesis and its enactive development (see Clark & Chalmers, 1998; Gallagher, 2018b; Thompson & Stapleton, 2009). For example, consideration of the transparency constraint (see Thompson & Stapleton, 2009) can guide interview questions regarding when and how artifacts or material items (such as assistive supports, wheelchairs, prosthetics, and the many rapidly evolving health wearables) become intimately coupled to a person and play an important role in shaping their experience and engagement in the world. In our pain study (Stilwell, 2020), in addition to drawing from the interview-based resources described above, our interview questions were guided by the enactive/5E theory found in Stilwell and Harman (2019). With this, we explored patient-clinician dynamics, the clinical context, and each patient's unique situation. This included discussions (with both clinicians and patients) regarding clinical findings and laboratory results (e.g., spinal imaging reports).
While we explored both clinicians' and patients' culture, past experiences, incoming knowledge, and expectations, we focused especially on clinicians' pain-related explanations and clinician-patient interactions as potential scaffolding for patients' experience of pain and pain-related meanings. In the individual patient interviews, we aimed to better understand patients' evolving sense-making, including their experiences of receiving explanations for their pain, prognosis, and treatment. This included enactive-inspired questions (Di Paolo et al., 2018), such as what pain-related meanings were significant to patients and why, given current interactions with their healthcare provider, their past experiences (e.g., receiving pain-related explanations from other clinicians), and their expectations of the future. In the following section, we provide more in-depth interview considerations that have data analysis implications. Making Data Collection Fully Enactive Enactive qualitative research reveals the particular shape or manifestation of participants' experience, and this includes the researcher's participation in the process of sense-making. In other words, the unfolding of an interviewee's experience in qualitative interviews is not a reifying recapture of "objective," pre-reflective, past experience. Meanings are not always apparent to participants and new meanings can unfold through interview questions, participant reflection, and the elicitation of narratives. This aligns with the process described by Varela and Shear (1999), where non-conscious or sub-personal phenomena may be perceived pre-reflectively without people being consciously aware of them. Then, with prompting during an interview, shapes and manifestations of experience can surface while pre-reflective phenomena unfold, allowing the interviewee to then verbally describe them. In this sense, we also suggest borrowing from Høffding and Martiny (2016), which we now briefly discuss.
Like other situations, research interviews are in a sociocultural context and there is a co-generation of knowledge (Høffding & Martiny, 2016;Zahavi & Martiny, 2019). This co-generation of knowledge between the interviewer and interviewee has been described as "fully enactive" (Thompson, 2017, p. 43). Data Analysis After organizing the collected data, it is helpful to read the transcripts and listen/watch the audio/video files. This provides the opportunity to get a feel for the data and to generate initial overall impressions. Before coding, this also allows researchers to reflect on tone, silence, hesitation, and other nuances that may shape interpretation during data analysis. In addition to the post-cognitivist and enactivist assumptions detailed earlier, enactive data analysis can draw inspiration from the various sources that formed the foundation of enactivism, as well as contemporary enactive literature. In the analysis phase of our pain study, we drew from, among other resources, phenomenology (e.g., Heidegger, 1927Heidegger, /1996Merleau-Ponty, 1962), enactive-ecological psychology and psychiatry (e.g., Gibson, 1977;Fuchs, 2005), and the intersubjective-focused approach to enactivism called participatory sense-making (De Jaegher & Di Paolo, 2007). Further, Figure 2 in Stilwell and Harman (2019) was referred to throughout the analysis. As suggested by Zahavi (2019c), we believe that enactivisminformed phenomenological research can avoid abstruse and excessively complicated (unnecessary) phenomenological considerations and practices that are advocated in the qualitative methods literature. This may allow researchers to maintain relevance to their area of inquiry without getting weighed down in analysis processes that may confuse and muddy, rather than improve the clarity and relevance of qualitative research. Additionally, we suggest that pre-existing knowledge (preunderstandings) and use of theory should be harnessed, rather than contained as advocated by some qualitative researchers (see Zahavi, 2019a and 2019b outlining the debate and confusion regarding the use of bracketing, the reduction, and the epoché in qualitative research conducted by non-philosophers). We suggest that researchers integrating enactive theory into their phenomenological studies can draw from thematic analysis as it is a flexible method that is often incorporated into studies with varied methodologies. Thematic analysis is a heterogenous method; however, it is generally used to identify, analyze, and report patterns (themes) in data. Thematic analysis is widely accepted as a helpful method for " . . . examining the perspectives of different research participants, highlighting similarities and differences, and generating unanticipated insights" (Nowell et al., 2017, p. 2). Although thematic analysis has few structured prescriptions and procedures, we suggest borrowing from the hybrid deductive-inductive approach to coding and theme development described by Fereday and Muir-Cochrane (2006). An a priori codebook can be created using enactive theory, guiding deductive coding. During analysis, new data-driven sub-codes/codes can be inductively generated and incorporated into the codebook. Ongoing integration of enactive theory and associated empirical research can also generate new sub-codes/codes. An initial version of our a priori codebook from our pain study is provided in the Supplemental Material. 
Deductive groupings of codes (nodes in NVivo) can be set up for each of the Es (Embodied, Embedded, Enactive, Emotive, Extended). Within each code, sub-codes (i.e., child nodes in NVivo) can be created that explore particular aspects of the E-based construct. It may be helpful to include pertinent operational definitions to continually revisit during coding. Although sub-codes may fit under multiple Es, a best-fit approach can be employed that is subject to change as analysis progresses. For deductive coding, text/video is coded, allocating segments of meaningful text/video to the deductively derived codes and sub-codes from the codebook. As the project progresses, the codebook can be elaborated and refined; this is consistent with guides on developing codebooks, noting that this is often an iterative and team-based process and there is a need for the team to be comfortable with uncertainty as the research progresses (DeCuir-Gunby et al., 2011). When potentially valuable text is identified that does not sit well with existing codes and sub-codes from the codebook, it can be placed under a code titled "other." Memos/journaling can be used to constantly track evolving thoughts. New sub-codes can be generated when multiple similar segments of coded text appear or when a content area is deemed to be a relevant outlier that is worthy of further reflection/investigation. Throughout the data analysis process, the researcher(s) looks across the codes/sub-codes and takes reflexive notes regarding connections and new insights. This provides the opportunity to inductively create new sub-codes in the "other" category or under the Es. New non-E codes and subsequent sub-codes can be created if warranted. If two or more authors are involved in coding, regular meetings can be arranged to discuss coding, update the codebook, and come to a consensus regarding key themes (this was the case in our pain study). Early in data analysis, meetings can be especially helpful to discuss the reliability of the codes (Fereday & Muir-Cochrane, 2006). This was also done in our pain study, which helped ensure both authors were applying the codes in a similar manner. Researchers may want to take notes during these meetings; this is a form of audit trail (Nowell et al., 2017). Further, continued meetings throughout analysis can facilitate ongoing reflexivity, consideration of incoming perspectives, and shared interpretive analysis of the data. As well, shared analysis and regular meetings (as well as peer debriefing, external review, and auditing) may increase the credibility and dependability of the research (Nowell et al., 2017). Throughout the data analysis process, existing E-based theory and empirical data can be integrated into the perspective the data are coded with. Frequently reviewing E-based theory and research can stimulate new considerations of the data. This process can help identify potential influences/contextual factors or taken-for-granted influences that have been overlooked. In the section below titled "Sample Findings," we provide some findings from our study to demonstrate how the use of enactive theory shaped our interpretations and findings. In the later stages of the analysis, themes are generated (defined and named) that move beyond the individual codes. This process involves consideration of patterns and the ways the Es interact together to shape participants' assigned meanings and experiences.
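For readers who find a schematic helpful, the short sketch below illustrates, in Python, the shape of the hybrid deductive-inductive structure just described: a theory-driven set of parent codes (one per E) with sub-codes and operational definitions, plus an "other" bucket that keeps the scheme open to inductive revision. The code names, definitions, and sub-codes shown are illustrative placeholders invented for this example, not the actual codebook from our pain study (an initial version of which is in the Supplemental Material); in practice, this bookkeeping is handled in qualitative analysis software such as NVivo rather than in scripts.

```python
# Illustrative sketch only: placeholder codes and definitions, not the study's codebook.
from dataclasses import dataclass, field


@dataclass
class Code:
    name: str
    definition: str  # operational definition, revisited throughout coding
    subcodes: list = field(default_factory=list)  # child nodes, elaborated inductively


# A priori (deductive) codebook: one parent code per E, plus an "Other" bucket.
codebook = {
    "Embodied": Code("Embodied", "Sense-making shaped by the lived/living body",
                     ["bodily sensation", "spatiality", "temporality"]),
    "Embedded": Code("Embedded", "Sense-making shaped by the physical/sociocultural environment",
                     ["clinical setting", "sociocultural context"]),
    "Enactive": Code("Enactive", "Sense-making shaped by action possibilities (affordances)",
                     ["affordances", "enactive metaphor"]),
    "Emotive": Code("Emotive", "Affect colouring sense-making",
                    ["validation", "frustration"]),
    "Extended": Code("Extended", "Artifacts and institutions shaping sense-making",
                     ["imaging findings", "anatomical models"]),
    "Other": Code("Other", "Meaningful segments not yet covered by the codebook", []),
}

# Coded data: each parent code accumulates the transcript segments filed under it.
coded_segments = {name: [] for name in codebook}


def code_segment(segment, codes):
    """Allocate a meaningful transcript segment to one or more codes.

    Deductive step: segments are filed under existing parent codes.
    Inductive step: anything that does not fit goes to "Other" and is
    reviewed in team meetings, where it may be promoted to a new code
    or sub-code as the codebook is elaborated.
    """
    for name in codes:
        bucket = name if name in coded_segments else "Other"
        coded_segments[bucket].append(segment)


# Example: a (hypothetical) patient quote about an imaging report, coded under two Es.
code_segment("The scan showed my disc was worn out, so I stopped bending.",
             ["Extended", "Enactive"])
```

Even as a toy example, this captures the key design choice of the hybrid approach: the a priori, theory-driven structure is fixed up front, while the "other" bucket and the editable sub-code lists keep the codebook open to data-driven elaboration as analysis progresses.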
It is important to note that initially separating the Es is somewhat artificial; however, in our pain study it helped break up and organize the data and forced us to consider how the Es were at play in our data. As analysis progressed and themes started to develop, we had a better appreciation of the relations between the Es and moved beyond the individual codes within each of the Es. In our pain study, we found that a clear separation of the themes was not realistic; therefore, we ordered and reported on them in a specific sequence, each theme building on the previous one(s). By the end of the last theme, there was an overall narrative about the entire data set that was specific to our research aim. Similar to Thorne (2020), we suggest that analysis does not simply stop at theme identification; rather, researchers should engage in critical reflection and further integrate theory to enhance insight and add value to the literature and the author(s)' respective field(s). This process continues during the writing phase, and when sharing and discussing findings and their potential application. Write, Share, Discuss, and Reflect Systematically organizing and documenting the research process and decisions will help when it comes to disclosing and reporting study details to others so that they can judge its credibility, dependability, and confirmability (Nowell et al., 2017). During the analysis and writing process, it can be helpful to discuss and present preliminary results and challenges with others (e.g., colleagues, supervisors, conference attendees). This is a form of peer debriefing and a means of establishing trustworthiness. Further, it can provide the opportunity to consider the practical relevance of the findings/themes. When presenting themes and the overall findings, discussing with others and referring back to the literature can create a more robust narrative (Nowell et al., 2017). In the end, providing detailed findings (thick descriptions) can help others judge the potential transferability of the research findings (Nowell et al., 2017). In relation to our pain study, the first author presented preliminary findings at conferences and at an international philosophy summer school for doctoral students. Also, both authors presented findings and discussed enactive theory on podcasts and during an online webinar with a live question-and-answer session. This all led to new considerations of the findings. For example, the first author sought out and incorporated additional literature; the concept of corporealization as described by Fuchs (2005) helped us better describe a theme through an enactive lens. We expand on this in the "Sample Findings" section below. Further, we highly suggest taking a fully embodied and enactive approach to knowledge translation. For example, in our pain study we worked with an artist/researcher to develop art pieces (see Stilwell et al., 2020) that reflected the pain-related metaphors we heard clinicians use with their patients. Our experience was that the use of art in presentations and writings helped audience members/readers connect with the work and underlying theory at a deeper level. Using art and connecting with others can also prompt further exploration of theoretical integration, as new ideas, inspiration, and literature may be identified and applied to the analysis. Preliminary themes can be sent to the participants, asking them to provide feedback if they wish.
Guided by enactive theory, this "member checking" is not meant to validate static experiences; rather, it is a continuation of the conversation and an extension of the findings if participants choose to provide feedback. Even when the full manuscript is complete (and shared, published, etc.), we suggest not considering it as something final; instead, it is a conversation that is to be built upon. General Overview As noted above, our pain study (see chapter three in Stilwell, 2020) was guided by enactive theory and we focused on clinical interactions involving pain-related explanations and diagnoses. Our analysis focused on the patients' sense-making in relation to their diagnoses and engagement in healthcare. Here we provide more study details. We audio-recorded appointments between clinicians (physiotherapists and chiropractors) and their patients with low back pain, then interviewed each (clinician, patient). Seven dyads (physiotherapists or chiropractors and their patients) were recruited, resulting in 21 recordings (7 appointments and 14 individual interviews). We identified four themes related to how clinical interactions and their contexts created affordance spaces (Gallagher, 2017) for patients' sense-making. Within our themes we found that pain-related metaphors were used bi-directionally and co-constructed between clinicians and patients, shaping patients' meanings. Patients' phenomenal experiences of integrating competing pain explanations ranged from validation and hope to frustration and anger. Clinicians' pain explanations either aligned or contrasted with patients' evolving pain narratives. This sense-making process included inter-bodily touch and movement, anatomical models, and imaging findings. Often, patients were set up to view their bodies as flawed. In conclusion, our findings provided further insight into why and how disabling back pain is partly iatrogenic. Clinician-patient interactions guide the way patients attune to and engage in their environments, shaping perception and meaning. Of clinical relevance pertaining to patient (dis)empowerment and placebo/nocebo effects, we found that clinicians' taken-for-granted words and interactions can act as scaffolding for patients' meanings, shaping the experience of pain for better or worse. Utility of Enactive, 5E Theory Informed by the enactive concept of participatory sense-making (De Jaegher & Di Paolo, 2007), we found that clinical interactions sometimes took on a life of their own, resulting in unintended meanings that could enrich or impede patients' therapeutic progress. Our focus on intersubjective clinician-patient dynamics led us to non-dualist, non-reductive, and non-individualistic ways of understanding the generation of meaning and placebo/nocebo effects. In many situations, clinicians' words and the use of anatomical models and imaging findings (X-ray, CT, MRI) formed strong emotive scaffolding for patients' pain-related sense-making. In some cases, spinal models and imaging findings (in conjunction with clinicians' pain-related explanations) appeared to negatively and dramatically change the way patients viewed their bodies and how they engaged in the world. In conjunction, we also observed the use of metaphors between clinicians and patients, many of which suggested that the body is a machine to be fixed.
In many cases, this led to the body becoming (even more) the focus of the patient's attention in that it was corporealized; in the words of Fuchs (2005), the body became more of a burdensome obstacle: opaque rather than transparent. A concurrent exploration of the E-based literature related to language and metaphor revealed the concept of enactive metaphor (Gallagher & Lindgren, 2015), which refers to metaphors that are acted out or brought into existence through action. Enactive metaphor is not a different kind of metaphor; rather, it is a way of engaging with metaphor (Gallagher & Lindgren, 2015). This literature had a significant impact on the first author's analysis and contributions to theme development. As a result, the concept of enactive metaphor was integrated into the coding scheme. Then, in the analysis it became apparent that enactive metaphor was often used unknowingly in the clinical interactions that we had recorded and in other clinical interactions that the patients described. For example, palpation, movement, and bodily feedback were used by clinicians to "show" patients so-called knotted, ropey, or tight muscles. Elsewhere, we describe how these types of clinical interactions involving enactive metaphor may be a powerful, yet overlooked, learning mechanism that can shape patients' agency and affordances (i.e., their possibilities for action). We identified some situations where enactive metaphor was unhelpful, as it linked the living body (body as object) to the lived body (body as subject) in overly simplistic and reductive ways (e.g., the patient was shown, through the use of touch and movement, that they have persistent pain because they have knotted muscles, slipped spinal discs, or no core stability, rather than considering pain as complex and multidimensional). Using enactive theory also helped us better understand how diagnoses can result in dual meanings, in that sense-making could be simultaneously shaped in both positive and negative ways. In our study, many patients were relieved and felt validated when they received an explanation for their pain that they deemed to be credible and that aligned with their evolving sense-making. However, we found examples where these same explanations were simultaneously nocebic, unbeknownst to the clinician and patient, relaying inaccurate views of pain (e.g., that injury or nociception has a linear relationship with pain, that tissues are not healed until pain dissipates, that pain is permanent or not malleable, etc.). Without recordings of clinical appointments and interviews with clinicians, we believe that we would not have fully appreciated patients' contexts, including the specific diagnoses and pain explanations received and how these were often "lost in translation". Unintended negative meanings were often generated, shaping patients' sense-making in suboptimal ways. The use of enactive theory also directed us to consider the importance of action or action possibilities (i.e., affordances) in relation to diagnoses and pain explanations. In many situations, it appeared that pain-related explanations limited patients' affordances, leaving them in ineffective treatment programs and possibly compounding pain-related disability. Some patients viewed their bodies as flawed, leading them to continue searching for a fix and to invest time (and money) into long-term passive care, rather than engaging in guideline-based recommendations, such as self-management.
Our findings led us to describe enactive therapeutic approaches that may help clinicians avoid some of the suboptimal practices identified in our study. This demonstrates the depth of the enactive literature and how it can be used. For example, the idea of working to therapeutically "open up" or reconstruct a patient's affordance space is found in recent enactive literature (for example, see Gallagher, 2018a). Viewing rehabilitation in terms of shaping affordances is a novel way to approach care and aligns with contemporary, evidence-based views on rehabilitation and health (e.g., Buchbinder et al., 2018; Huber et al., 2016) that recommend focusing on patients' strengths rather than weaknesses, and guiding patients to adapt so they can achieve meaningful activities and goals that they have personally identified. When thinking in terms of a patient's affordances, we are simultaneously directed to consider both the patient's body (lived and living) and their environment, which offers a range of invitations to act, depending on the patient's concerns, skills, and abilities. This leads us to consider multiple data sources, which offer rich insight into a patient's sense-making yet also present a challenge that we address in the next section. Addressing the Integration Problem Gallagher has outlined some of the challenges that arise when integrating enactivism into research initiatives and clinical practice (Gallagher, 2018a, 2020). As detailed above, enactivism does not focus only on the brain, environment, or behavior; rather, the focus is on dynamics between the person and environment. This is a challenge, as it is impossible to take into consideration all factors at once in a robust way. This same issue is apparent in interpretive phenomenology, as indicated by van Manen: "to do hermeneutic phenomenology is to attempt to accomplish the impossible: to construct a full interpretive description of some aspect of the lifeworld, and yet to remain aware that lived life is always more complex than any explication of meaning can reveal" (van Manen, 1990, p. 18). All phenomenological research, regardless of the approach taken, requires an appreciation that full or final descriptions are unachievable and that no single theme can completely unlock full meaning (van Manen, 1990). Enactivism also aligns with interpretive phenomenology's non-reductive approach; the end goal is not to reduce an experience to the sum of its parts, as this is impossible. Instead, there is an attempt to understand the expression of the whole, while still considering the parts (van Manen, 1990). Yet, an enduring challenge that we have struggled with, both clinically and academically, is how to best connect different types of data when investigating first-person experience (i.e., the so-called integration issue). As outlined by Thorne (2016, p. 138), pain researchers have long recognized the difficulties in fully reconciling "... the relationship between subjective and objective knowledge". For complex human issues such as pain, Thorne suggests that it shouldn't be an either/or situation regarding attending to either subjectivity or objectivity. She warns that we cannot simply draw conclusions about how people feel by observing how they behave, yet she also notes there are limitations to interview data. Similarly, Sandelowski (2002) has noted the seduction and limitations of just using patient interviews.
Therefore, she argues that qualitative health research can benefit from supplementing patient interviews with observation, communication with clinicians, and review of medical records. This is what we did in our study (Stilwell, 2020), and it was prompted by our exploration of enactive theory. However, the integration challenge remains: how can we best integrate various data while fully respecting participants' experiences? Further, we have to appreciate the "blind spot" (Frank et al., 2019) of science: scientific knowledge is built on and filtered through the first-person perspective (i.e., experience has primacy; science cannot step "outside" researchers' lived experience). While different techniques have been proposed over the years to "triangulate" data to address the integration issue, de Haan's recent enactive framework (2020a, 2020b) for psychiatric conditions offers a promising solution. de Haan based her framework on the enactive, life-mind continuity thesis (Froese & Di Paolo, 2009; Maturana & Varela, 1992; Thompson, 2007) and outlined how there are four connected/continuous, yet non-reducible, dimensions in the single person-environment system. These four dimensions are: physiological, experiential, sociocultural, and existential. Expanding on the description we provided in Figure 1, we can consider the connections between the four dimensions by considering circular or organizational causality, while maintaining that experience cannot be reduced to physiological processes (living body). Appealing to complexity theory and dynamical systems theory, de Haan refers to experiential processes as being more global, while physiological processes are more local. Further, she notes the asymmetry in global-to-local and local-to-global interactions: experiential processes necessarily include physiological processes (changes in experiential processes always include changes in some physiological processes); yet, not all changes in physiological processes involve or "add up" to changes in experiential processes (de Haan, 2020b). She suggests that clinicians and patients can collaborate to construct personalized network models that map out potential connections across the four dimensions and how these may be shaping a patient's health concern (de Haan, 2020b; see Chapter 8 for example models). We feel this type of approach is not just relevant for clinical practice; it could also be integrated into enactive-informed phenomenological research: the personalized network models could be constructed between the research team and the person(s) with the experience of interest. [Figure 3. An iceberg, adapted from a 2016 source, depicting the key elements in the enactive phenomenological research approach we presented in this paper. Note. The tip of the iceberg represents research methods as they are the more visible/tangible processes (i.e., the "doing" tools). Most of the iceberg (underwater) represents the less visible/tangible (theoretical) methodology and paradigm, including guiding philosophical assumptions (i.e., the "thinking" tools). The underwater foundation (methodology and paradigm) provides the support for the tip (methods). As depicted in the figure, often the delineation between methodology and paradigm is somewhat blurred.] This leads us to, as de Haan argues, a convincingly non-dualist and non-reductive approach to exploring experience. For example, a personalized network model could contain: relevant lab and clinical exam findings (physiological dimension); the patient's self-reported experience, such as anxiety, pain, worry, etc.
(experiential dimension); the clinician's diagnosis and advice/education (sociocultural dimension); and the patient's reflections and stance on the diagnosis (existential dimension). Then, connections could be drawn between the dimensions (for examples see Chapter 8 in de Haan, 2020b). Clinical diagnoses, observation of engagement with artifacts, and other information could be used to shape interview questions and elicit participants' reflections and meanings that may otherwise be taken for granted by participants and might not surface during interviews. With this, we can move well beyond the methodological individualism that is often found in phenomenological research. While we did not specifically draw out network models with the patients in our study, we encourage others to try this. However, we recognize that this is a challenging task and requires the researcher(s) to have the ability and expertise to collect data across the four dimensions and then use this information to inform qualitative interview questions and analysis. Yet, this is also a strength in that it could encourage the construction of more diverse, interdisciplinary research teams (i.e., involving qualitative and quantitative researchers, philosophers, clinicians, and patient partners). Rigor If readers of a study are unsure how the researchers analyzed their data or what assumptions informed their analysis, it is difficult to gauge its trustworthiness (Nowell et al., 2017). When integrating enactivism into a qualitative study (or when conducting any qualitative study for that matter), early consideration and use of the consolidated criteria for reporting qualitative research (COREQ) can promote the study's validity, transparency, and trustworthiness (Tong et al., 2007). For example, in our pain study we added a Supplemental File with the 32-item COREQ checklist, including additional study details. We also suggest reviewing and incorporating the thematic analysis rigor components outlined by Fereday and Muir-Cochrane (2006) and Nowell et al. (2017). We have discussed many of these rigor components in this paper, including: the importance of documentation and reporting, such as journaling and keeping an audit trail of decisions; shared analysis, peer debriefing, and regular meetings to increase credibility; and providing detailed findings (so-called thick descriptions) so that others can judge the potential transferability of the research findings.
Table 3. Steps for conducting enactive phenomenological research.
1. Determine Unit of Analysis: The unit of analysis is (at least) the individual(s) with the situation/experience of interest and serious consideration of their context, including each of the 5 Es.
2. Sampling: Purposive sampling, depending on population and phenomena of interest.
3. Interview Guide Development: Use enactive/5E theory to develop a semi-structured guide.
4. Data Collection: Observation and interviews; however, this may vary.
5. Data Storage & Organization: Transcribe and organize files/documents (consider data management software). Establish the 5E codebook and a way of taking memos and documenting reflective thoughts during data analysis.
6. Review Files: Read transcripts and review files (audio/video).
7. Deductive Coding: Code text, allocating segments of meaningful text to the 5E-based codes/sub-codes. Discuss code reliability if working in a team so codes are applied in similar ways.
8. Inductive Coding: Generate new, data-based codes (when appropriate). Add these to the codebook (discuss with research team). Discuss code reliability if working in a team.
9. Integration: Review E-based theory and research; add new codes/sub-codes (when appropriate). Discuss with the research team and document decisions and thoughts.
10. Theme Generation: Reflect on patterns and interactions among the 5Es. Connect coded text across codes/sub-codes to create overarching themes. Define and name themes. Review the themes; incorporate external review and audits (if deemed appropriate).
11. Writing & Reflection: To express findings, engage in writing and reflective practices. This can include embodied-enactive practices such as using art to represent findings. Take a critical approach and further integrate theory to enhance insight and add value to the literature and the author(s)' respective field(s). Consider using the COREQ and review reporting elements that enhance thematic analysis rigor and trustworthiness.
12. Discuss Findings: Discuss and share findings with others (e.g., colleagues or conference delegates). This step may include member checking, as appropriate, and may prompt further discussion and thematic refinement.
13. Re-write: Write and re-write to produce a report (e.g., manuscript).
14. Share: Disseminate/publish the report. Consider ways to share reflection pieces that convey the findings in more accessible ways (e.g., share and discuss art).
Note. Although we outline the use of 5E theory, researchers may have different philosophical commitments and choose to ground their work in 3E or 4E theory. Summary There is a need for phenomenology (as a qualitative research approach) to be renewed and refreshed with opportunities for methodological flexibility. In this paper, we argued that one way to achieve this is by using enactivism as a flexible resource. We outlined how enactivism sits within the postcognitivist paradigm and gave examples of ways that enactivism could be integrated into existing qualitative methods. Figure 3 visually depicts the paradigm, methodology, and methods we discussed. When integrating enactivism into phenomenological research, as visualized in Figure 3, we can call this methodology "enactivism-informed phenomenology," or more broadly "an enactive phenomenological approach" when referring to both methodology and methods. In Table 3 we summarize one way to conduct enactive phenomenological research. Although 14 steps are presented in sequence, this is not a linear process; many of the steps are iterative in nature and can be modified or adapted. Conclusion In this process paper, we presented enactivism as a flexible resource that can be integrated into phenomenological research. We provided examples from our own study that integrated enactivism into existing methods (observation/interviews and thematic analysis with hybrid deductive-inductive coding guided by E-based/5E theory). This approach to qualitative research offers novel ways to explore conditions and situations with a subjective element (i.e., experiences of living with a health condition such as pain, depression, or anxiety and engaging in health services). Integrating enactivism into qualitative research is still in its early stages. We encourage qualitative researchers to explore what we have presented, and we welcome collaboration with philosophers and qualitative researchers to help refine and adapt these ideas.
On the Necessity of Convergence of Chinese Accounting Standards towards Internationalization At present, the trend of economic globalization is in full swing: trade between countries around the world is deepening, and the international financial capital market is booming. At the same time, the world's scientific and technological revolution is advancing rapidly, and national productivity levels have developed quickly, driving rapid growth of the world economy. In this situation, if the accounting standards of individual countries, which reflect how economic information is processed, fail to converge internationally, substantial trade barriers will inevitably arise in the long run: transaction costs will remain high, the transmission of key economic information will lag, and means of production will be wasted unnecessarily, making it difficult to promote the coordinated progress of national economies efficiently. Therefore, in order to establish a sound financial capital market order, maintain a stable and positive world economy, and improve the happiness index of people all over the world, it is particularly necessary to call on all countries to build internationally convergent accounting standards. As a mainstay of world trade, China is obliged to improve its own accounting system and adapt to global economic development. Its accounting standards will therefore strive to converge with international standards in the future, which is not only needed for China's own economic development but also of great practical significance for the development of the world economy. However, owing to specific factors such as national conditions, the economic environment and historical issues, the internationalization of China's accounting standards still has a long way to go. On this basis, we should rationally analyze the background of, and initial motivation for, the convergence of China's accounting standards with international accounting standards, and address the differences and consequences between China's accounting treatment and international standards that arise from the specific business environment. Then, proceeding from China's national conditions and the differences in accounting standards noted above, we should objectively analyze the problems in, and the deeper reasons behind, the internationalization of China's accounting standards by combining quantitative and qualitative methods. Finally, prescribing targeted remedies, proceeding from reality, and taking the basic principles of Marxist materialist dialectics, we should realistically make targeted suggestions on the international convergence of China's accounting standards, aiming to make a modest contribution to the academic development of accounting standards, with China as a reference. Introduction International Accounting Standards (IAS) were issued by the International Accounting Standards Committee (IASC) from 1973 to 2000 and revised by the International Accounting Standards Board (IASB); they standardize international transactions and have strong reference value.
However, the earliest accounting system regulation in China was the Accounting Standards for Business Enterprises issued by the Ministry of Finance in 2006. Compared with developed countries, China's financial capital market started late, and a certain gap remains between the scale of its commodity economy and theirs. The accounting system formulated by China in its early days to constrain and standardize financial workers therefore already drew on international accounting standards for reference and, at least for China's national conditions at that time, was relatively close to international practice. In recent years, however, China's economy has progressed rapidly and the original accounting standards have lagged behind to some extent. In 2014, the Ministry of Finance therefore issued eight new accounting standards, covering fair value and the presentation of financial statements, among others. In 2017, it overhauled the standards on government subsidies, recognition and measurement of financial instruments, transfer of financial assets, hedge accounting and revenue, and in 2019 it adjusted the debt restructuring standard again. These adjustments and revisions took a solid step towards international accounting standards; in particular, the five-step method of revenue recognition in China's accounting standards is almost a translation of the international revenue standard. It is thus not difficult to see that the improvement of the economic level has led to the gradual improvement and systematization of accounting standards, and that with the further development of China's economy, Chinese accounting standards will need to take firmer steps towards international convergence. Background of Convergence of Chinese Accounting Standards towards Internationalization The internationalization and convergence of accounting standards, in the author's view, refers to a country's accounting standards which, while maintaining the country's specific national conditions and characteristics and seeking common ground while reserving differences, are formed by referring to international accounting standards from the perspective of the national economy and people's livelihood, achieving a relative agreement between domestic and international accounting standards in financial and accounting terms. Such standards, recognized and observed by the public after integration, have distinct guidance and auxiliary value for economic management activities and are relatively universal. In the current world economic structure, as a country with a large economic volume, it is of far-reaching significance for China to efficiently promote its own economic development and thereby indirectly undertake the mission of a major country and raise the world economic level. However, the reality is not optimistic: China faces unprecedented opportunities and challenges. Since the start of the 21st century, the tide of economic globalization has been vast, the construction of the Community of Shared Future for Mankind framework is the general trend, and the rapid development of the scientific and technological revolution has brought great changes and impacts to traditional industries.
These complex and diversified external conditions, often described as changes unseen in a century, have brought both pressure and impetus to the internationalization and convergence of China's accounting standards. On the other hand, at China's own level, since the opening of the 14th Five-Year Plan, supply-side structural reform promoting the joint development of the real and virtual economies has been implemented, and with the strong push for economic construction, higher standards and greater standardization of accounting rules are expected. Under this dual internal and external impetus, if accounting standards can converge towards international practice, China can help promote the further rapid development of the global economy, improve the happiness index of people all over the world, and bring closer the goal of relieving poverty and backwardness in some countries and regions. Therefore, by objectively analyzing these background conditions, the author aims to clarify the original intention behind the convergence of accounting standards towards internationalization, and thereby raise enthusiasm for the reform at the general strategic level. The Background of Economic Globalization After the Second World War, global economic growth slowed, and some countries and regions even experienced depressions of zero or negative growth, while facing international risks that could not be ignored; the world financial capital market became increasingly turbulent and encountered unexpected difficulties. Taking the EU as an example, different countries and regions began to move towards economic union, and trade among them came to account for a considerable part of their economies. In other words, the wave of economic globalization has arrived, greatly affecting the economic management of various countries: because the accounting systems adopted by different countries treat key matters differently, there is a degree of friction in foreign trade practice, and unfavorable situations such as trade disputes have even arisen, seriously damaging healthy trade. What is needed, in keeping with each country's national conditions and characteristics, is an accounting system that is relatively in line with international accounting standards, can be recognized by domestic financial and economic managers, promotes smooth and orderly trade, and is accepted and recognized by trading partners. Community of Shared Future for Mankind Background Accounting standards are an important means of assisting economic management activities, and the source of the rapid development of the world commodity economy is essentially the money-based value medium of commercial credit, which promotes the prosperity and development of exchange among residents. The economy exists mainly to create convenience and well-being for people around the world, and indirectly the standardization of accounting standards is closely related to people's livelihood; adopting an accounting system in line with international standards is especially important for stabilizing the financial systems of various countries and establishing a good social environment for commercial credit.
On the other hand, under the framework of the Community of Shared Future for Mankind, the economic development of all countries is closely interrelated, and difficulty in communicating closely with other countries will bring unpredictable adverse consequences. In short, in recent years more than 100 countries and regions have actively adopted systems converging with international accounting standards for trade. In this environment, abandoning convergence with international accounting standards would be an irrational act of standing still, building a cart behind closed doors, and being arrogant. The convergence of accounting systems towards internationalization is therefore a historical trend and irreversible, and correctly viewing and actively building internationally convergent accounting standards is very important for achieving a stable and harmonious environment for the national economy and people's livelihood and for building a sound Community of Shared Future for Mankind. The Background of Vigorous Development of the Scientific and Technological Revolution With the rapid development of world science and technology, China's productivity has improved markedly and its economic development level rises day by day, while the economic base determines the superstructure. Accounting, as the main tool for recording economic and business information, plays an important role in the superstructure, and the degree to which it is influenced by the economic level is self-evident; international accounting standards, as standards recognized across the world economy, can effectively reflect the development of the industrial economy. Establishing and perfecting one's own accounting standards with reference to them, and striving for convergence, has obvious guiding value for a country's economic governance. In this sense, establishing convergent international accounting standards has significant positive externalities for a country's own economy, the world economy, and the happiness index of the world's people. Furthermore, the prosperity of the scientific and technological revolution is pushing traditional industries towards high intelligence and low cost, creating further demand for the integration of industry and finance in the future. Under this premise, some accountants displaced by intelligent finance have been freed to devote themselves to deeper financial data analysis, industry-finance integration and other economic management work, and this deeper work has in turn promoted the revision and innovation of accounting standards. In other words, the revision of accounting standards derives from the continuous accumulation of practice; for China, with the further development of the economy, there will remain great demand for updating and improving accounting standards. China's Own Objective Demand Background Since China implemented reform and opening up in the late 1970s, it has drawn on and introduced the market economy system and deeply integrated it with China's national conditions and characteristics, forming a socialist market economy with Chinese characteristics, with public ownership as the main body and multiple forms of ownership developing together. This decision can be called a key historical turning point, and since then China's economic construction has taken off.
In just a few decades, it has achieved an economic level that developed countries took centuries to reach, attracting worldwide attention. At the same time, with the growing scale of the economy, accounting standards, the language of economic signals, have gone through a historic process of development from scratch, and the working level of financial staff in enterprises has improved to a certain extent, greatly enhancing China's international status and influence. However, alongside the rapid development of China's economy and its favorable financial management paradigms, some negative aspects have inevitably emerged: the germination and breeding of financial fraud, exposed blind spots in internal control and audit systems, a severe shortage of high-level financial workers alongside an excessive proliferation of low-level basic accounting workers, and so on. Compared with the high standards and strict requirements that international accounting standards place on accounting work, and the training and selection models that international accounting regulations apply to financial workers, China still has a long way to go. Converging accounting standards is therefore also, to a certain extent, a means to ensure the harmonious and healthy development of the socialist market economy with Chinese characteristics, keep the Chinese economy vigorous and stable, maintain and consolidate China's position in the world economy, and improve comprehensive national strength. Differences between Chinese and International Accounting Standards in Dealing with Economic Business In fact, when the Ministry of Finance of China first issued the accounting standards (the Basic Standards), it had already considered convergence with international accounting standards to a certain extent. To explore deeper convergence, it is therefore necessary to examine the specific differences between Chinese and international accounting standards in practical application, so as to provide reference material for analyzing the underlying reasons and proposing targeted solutions. Specifically, there are 10 differences between international and Chinese accounting standards in practice: the fixed asset pricing model, capitalization of borrowing costs, non-monetary asset transactions, short-term investment, goodwill treatment in long-term equity investment, attribution of research and development expenses, government subsidies for assets, treatment of start-up expenses, income tax accounting, and debt restructuring. Among these, the clauses that most clearly reflect the differences between the standards have guiding significance for the construction of converged standards. The author considers that the treatment of asset impairment reversal within the fixed asset pricing model best reflects the differences between the standards, so it is important to analyze it in depth. Accounting Standards for Business Enterprises No. 8 (Impairment of Assets), promulgated by the Ministry of Finance of China, clearly stipulates that once an impairment loss on a long-term asset is recognized, it shall not be reversed in future accounting periods. In contrast, IAS 36, issued by the International Accounting Standards Committee, stipulates that at the balance sheet date of each accounting period an enterprise must determine, according to industry practice or management experience, whether there is any indication that an asset impairment loss recognized in previous years no longer exists or has decreased. If so, the recoverable amount of the asset can be recalculated, and an increase in the recoverable amount can partially reverse the accrued impairment provision, although the reversed value cannot exceed the original total impairment amount. Any increase in the asset's value beyond the carrying amount that would have applied had no impairment been recognized should not be treated as a reversal of impairment, but only as revaluation and appreciation of the asset. Clearly, there is a large difference between international and Chinese accounting standards in the treatment of asset impairment, and the main reasons are as follows. In Chinese accounting practice, the book amounts of fixed assets are large, and judging the conditions for reversing asset impairment involves a degree of subjectivity and regulatory gaps; moreover, China's market economy, compared with those of developed countries, has not yet reached a scale at which asset reversals can be handled reliably. Therefore, out of prudence in accounting information, and to avoid improper revaluation and appreciation of assets and the information distortion that arises when enterprises manipulate profits through impairment and reversal, causing financial information to deviate greatly for enterprise stakeholders, the reversal of asset impairment needs to be strictly checked or even prohibited. Furthermore, once the conditions for reversal are met and a reversal is carried out, the relevant information must be disclosed in the report, including the amount reversed and the conditions and reasons for the reversal; if a single asset is reversed, key information such as the nature and scope of application of the asset must also be described. To a certain extent this entails higher management and maintenance costs, which does not meet cost-effectiveness requirements. In contrast, because international accounting standards are used extensively in more than 100 countries, the value of fixed assets covered has reached a certain scale, and the broad reach of international accounting standards has produced relatively complete systems and rules for regulatory review. Moreover, the countries and regions that have long used international accounting standards for their disclosures already possess relatively mature market economies, and their complex financial information disclosure can meet the needs of a wide range of stakeholders.
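To make this contrast concrete, here is a minimal sketch (our own illustration; the function, figures and the simplified cap logic are assumptions for exposition, not text from either standard) of how a reversal would be computed under IAS 36 versus prohibited under Accounting Standards for Business Enterprises No. 8:

```python
def impairment_reversal(carrying_amount: float,
                        recoverable_amount: float,
                        carrying_if_never_impaired: float,
                        standard: str = "IAS36") -> float:
    """Impairment reversal an enterprise may recognize for a long-term asset.

    IAS 36: a reversal is allowed, but the written-up carrying amount may not
    exceed the (depreciated) carrying amount the asset would have had if no
    impairment had ever been recognized; any excess counts as revaluation.
    CAS No. 8 (China): reversal of long-term asset impairment is prohibited.
    """
    if standard == "CAS8":
        return 0.0
    cap = min(recoverable_amount, carrying_if_never_impaired)
    return max(cap - carrying_amount, 0.0)

# Illustrative figures: an asset was impaired to 60; had it never been
# impaired, depreciation would now leave it at 90; its recoverable amount
# has since risen to 100.
print(impairment_reversal(60, 100, 90, "IAS36"))  # 30.0 -> write-up capped at 90
print(impairment_reversal(60, 100, 90, "CAS8"))   # 0.0  -> no reversal permitted
```

In this assumed scenario, the 10-unit gap between the recoverable amount (100) and the cap (90) would, under IAS 36, be treated as revaluation rather than reversal, while under the Chinese standard no reversal at all is recognized.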
International Accounting Standards The purpose of the convergence of international accounting standards is to safeguard the reasonable economic interests of all countries, but convergence does not mean complete equivalence. There is no doubt that countries differ to varying degrees owing to the constraints of political history, national cultural traditions and objective conditions, and each therefore needs to preserve and safeguard its own specific interests; the degree of convergence often needs to be carefully weighed. Moreover, international accounting standards were born of the continuous prosperity of the commodity economy, involve the integration and absorption of the essence of the accounting policies of countries across the world economy, and rest on strict and thorough auditing, internal control and other auxiliary preconditions; their content is therefore particularly extensive. Although China's economy is currently developing well, it must be acknowledged that China is still a large developing country, with a certain gap remaining between its commodity economy and those of developed countries. Under this premise, it is still difficult to fully digest and absorb the spirit of international accounting standards and then strive for convergence, and rapid convergence in a short time would impose high cost pressure. In sum, truly implementing convergence with international accounting standards remains a long-term undertaking. The Economic Environment of Different Countries is Quite Different, and China is No Exception The construction of convergent international accounting standards requires coordinating a large number of countries and economic organizations. These different economies are affected by different environmental factors and have their own distinctive development characteristics. Because of the differences in economic environment between China and these countries, considering the internationalization of domestic accounting standards inevitably involves being affected by accounting standards formulated under the special national conditions of other countries. On this premise, rapid change often has an unpredictable impact on long-established modes of production and operation, and can even lead to the collapse and failure of an economic system. It is necessary to thoroughly analyze, compare and weigh whether these special clauses are in line with China's national conditions and needs, and whether they can protect China's vital rights and interests. Under this premise, a long-term view must be taken on the road to convergence. The Thinking of Some Enterprise Managers on the Construction of Accounting Standards Needs to be Further Improved. There is a Certain Gap in the High-end Accounting Talent Team China's accounting standards are to converge with international accounting standards, with systematic norms in line with national conditions then formulated and applied in subsequent economic practice; in the final analysis, this work is done by higher-end professional accountants. However, given the differences in national conditions, seeking common ground while reserving differences and digesting and absorbing the contents of international accounting standards is a great test of accountants' professional knowledge and ability, and as technological progress greatly boosts productivity and industry develops rapidly, economic business is becoming increasingly complicated. On this basis, it is difficult to handle economic business correctly, choose appropriate measurement scales, assign reasonable account classifications, and carry out systematic macro-level deployment of taxation and auditing. Moreover, the application of accounting theory needs long-term running-in with practice, so even once the work of converging with international accounting standards is completed, it is often difficult to field ideally skilled accountants for business practice in a short time.
In addition, a number of uncontrollable risk factors restrict the construction of convergent international accounting standards. On the other hand, China's enterprise forms are numerous and complicated: alongside a considerable number of listed companies there are also large numbers of small and micro enterprises and individual industrial and commercial households that cannot be ignored. By virtue of their relatively simple accounting work, these more basic enterprise organizations inevitably become a gathering place for middle- and lower-level accountants in the economy and society; in other words, basic enterprise organizations also lack high-level accounting talent. As their business scale expands, future demand for high-level accounting talent is foreseeable; however, faced with the complex requirements of the converged standards, low-level accounting personnel are often unable to shoulder the burden of enterprise management. In the long run the future of such enterprises is worrying, let alone their ability to practice the converged standards, which undoubtedly poses a huge obstacle to implementing convergence. The Number of Low-level Accountants at the Grass-roots Level is Supersaturated Accounting standards must be put into practice by accountants. However, because China has a large base of low-level accountants at the grass-roots level, and the economy in fact struggles to absorb such a large accounting workforce, the overall average pay and conditions of the profession are diluted. With the deepening and popularization of big data, artificial intelligence and cloud computing, grassroots accountants face higher unemployment risks and an obvious industry reshuffle, and bad money drives out good: to some extent, the grassroots accounting workforce of the future may consist of those left behind. Yet the high-end accounting talent who will practice the converged international standards must be continuously cultivated, through further study, from the ranks of lower-level grassroots accountants. If grassroots accounting work is neglected, high-end accounting talent will also face a certain risk of withering away, to say nothing of realizing the application value of the internationally converged standards, which cost a great deal to build. As a result, the motivation for convergence would be greatly weakened. Methods and Measures to Promote the Convergence of International Accounting Standards According to the basic principles of Marxism, the world is material and matter is in motion, and China's accounting standards likewise belong to the material world. With reference to the law of the mutual transformation of quality and quantity and the law of the negation of the negation, combined with China's specific national conditions and practical characteristics, we can see that with sustained economic development, accounting standards will accumulate and evolve out of experience and industry practice in economic management; the evolved standards will then be tested against the constraints of the old ones. After further running-in with the old standards, innovative accounting standards with Chinese characteristics will be formed that offer advanced guidance for the economic environment and converge with international accounting standards.
However, in the process of evolution it is still necessary to remain grounded in China's national conditions, draw certain boundaries, adhere to the mass line, and establish a higher awareness of accounting standards in the minds of the public. In addition, to avoid a generational crisis in the accounting industry, alongside training financial elites who meet the practical requirements of high-standard accounting rules, it is necessary to narrow the extreme salary disparities in the accounting industry to a certain extent, so as to better meet the livelihood needs of China's vast number of grassroots financial workers. On this basis, these efforts can coalesce into a centripetal force and make the road to convergence of accounting standards smoother. Build up the Recognition Scale of International Accounting Standards in China As China's accounting standards move towards international accounting standards, the influence of its special economic environment, history, culture and political factors means that China is in no position to converge fully with international accounting standards. It is therefore necessary to maintain distinctive Chinese characteristics; otherwise, studying the convergence of standards is meaningless and may even bring serious economic consequences. China needs to strictly practice the general policy of seeking common ground while reserving differences. On this basis, high-level experts and scholars need to objectively deepen their learning and understanding of international accounting standards and rigorously construct their own scale of recognition, so as to generate greater enthusiasm for the construction of converged standards and further promote their efficient and steady progress. Only in this way can convergence bring considerable value to China's legitimate rights and interests and to the economic development of all countries. Cognitive Level Owing to China's inherited historical factors, in the minds of a considerable number of people accounting is a simple bookkeeping tool and accounting practitioners are mere bookkeepers, which is undoubtedly regrettable. Efforts must therefore be made to deepen citizens' knowledge and understanding of accounting work and accounting workers. Only when the public's cognitive level improves can the value of standards convergence be effectively promoted, the cost of popularization and publicity be reduced, and effective guidance be given to the in-depth development of accounting standards and their approach to international accounting standards. Reserve of Accounting Workers With the rapid development of the scientific and technological revolution, productivity has leapt forward. With the prosperity of the commodity economy and the financial capital market, the accounting work corresponding to economic management has become increasingly diversified, complicated and specialized, gradually transforming from a single financial accounting function focused on simple bookkeeping and reporting to diversified functions: responsibility accounting, tax accounting, financial cost management, financial risk strategic planning, internal control and audit work, which were rare in the past, now appear in daily accounting work.
There is no doubt that international accounting standards, as a compendium reflecting the essence of advanced accounting work, involve a large body of complex and important accounting practice and theoretical knowledge that only high-end professional accounting personnel can master. To accelerate the convergence of accounting standards towards internationalization, it is therefore necessary to enhance the professional quality and core competence of accounting staff, for example by incorporating high-end international accounting certificates such as the ACCA, together with the Chinese CPA examination, into the core channels of continuing education and training for high-end accounting talent, further promoting the integration of industry and finance, and broadly implementing the management accountant qualification in order to reduce the learning barriers facing professional accountants. Improve the Treatment of Accounting Practitioners to Attract More Elite Talent to Accounting Work In China's social and industrial structure, accounting work as a whole is polarized in income: the salary gap between financial personnel engaged in basic grass-roots accounting work and high-end professional accounting talent is large, while the transition from the grass roots to the elite takes a long time. If this salary gap persists or even intensifies, there may be only a handful of accounting practitioners in the future, and senior accounting personnel may find no successors. In line with China's policy of achieving common prosperity, and in order to cultivate more outstanding accounting talent, it is necessary to narrow the income gap to a certain extent and improve the salary level of accounting practitioners. In that case, the high-end accounting elite can receive fresh blood, and the use value of internationalized, converged accounting standards can be highlighted and enhanced. Conclusion As an important part of economic globalization, China's economic strength and international influence grow day by day with its comprehensively deepened reform. It must nevertheless be affirmed that China is still a developing country, and the gap between its market economy and those of developed countries persists, which is itself an important driving force for the future internationalization and convergence of its accounting standards. One important starting point of China's accounting policy and system reform is to raise the economic living standard and happiness index of the world's people. For this reason, although converging with international accounting standards is difficult, with the continuous improvement of China's comprehensive national strength, the further expansion of the market economy and the growing reserve of high-end accounting talent, China will certainly be able to build advanced accounting standards with Chinese characteristics that converge with international accounting standards in the future.
Chemical and microbiological insights into two littoral Antarctic demosponge species: Haliclona (Rhizoniera) dancoi (Topsent 1901) and Haliclona (Rhizoniera) scotti (Kirkpatrick 1907) Introduction: Antarctic Porifera have gained increasing interest as hosts of diversified associated microbial communities that could provide interesting insights into the holobiome system and its relation to environmental parameters. Methods: The Antarctic demosponge species Haliclona dancoi and Haliclona scotti were targeted for the determination of persistent organic pollutant (i.e., polychlorobiphenyl, PCB, and polycyclic aromatic hydrocarbon, PAH) and trace metal concentrations, along with the characterization of the associated prokaryotic communities by 16S rRNA next-generation sequencing, to evaluate possible relationships between pollutant accumulation (e.g., as a stress factor) and prokaryotic community composition in Antarctic sponges. To the best of our knowledge, this approach has never been applied before. Results: Notably, both chemical and microbiological data on H. scotti (a quite rare species in the Ross Sea) are reported here for the first time, as is the determination of PAHs in Antarctic Porifera. Both sponge species generally contained higher amounts of pollutants than the surrounding sediment and seawater, thus demonstrating their accumulation capability. The structure of the associated prokaryotic communities, even if differing at order and genus levels between the two sponge species, was dominated by Proteobacteria and Bacteroidota (with negligible Archaea abundances) and appeared in sharp contrast to communities inhabiting the bulk environment. Discussion: Results suggested that some bacterial groups associated with H. dancoi and H. scotti were significantly (positively or negatively) correlated to the occurrence of certain contaminants.
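To illustrate the kind of taxon-contaminant screening implied here, the following is a minimal sketch (our own, with hypothetical values; the taxon, congener and figures are illustrative assumptions, not data from this study) of rank-correlating a bacterial group's relative abundance with a pollutant concentration across sponge individuals:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical example: 6 sponge individuals, relative abundance of one
# bacterial order vs. the concentration of one PCB congener (ng/g dry weight).
df = pd.DataFrame({
    "order_relative_abundance": [0.12, 0.18, 0.09, 0.25, 0.21, 0.15],
    "pcb_conc_ng_g_dw":         [3.1, 4.8, 2.4, 6.9, 5.5, 3.9],
})

# Spearman's rank correlation is robust to non-normal concentration data.
rho, p_value = spearmanr(df["order_relative_abundance"], df["pcb_conc_ng_g_dw"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# When screening many taxon-contaminant pairs, p-values should be corrected
# for multiple testing (e.g., Benjamini-Hochberg) before calling a group
# significantly positively or negatively correlated with a contaminant.
```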
Introduction The long temporal and biogeographical isolation of the Southern Ocean from other seas contributes to the strong diversity of zoobenthic marine communities in this region, characterized by a high endemism rate (McClintock et al., 2005; Kersken et al., 2016; Costa et al., 2023). Porifera, covering about 55% of the substrate on Antarctic shelves, emerge as predominant in both abundance and biomass, with ∼390 described species (44% of the total species are endemic) (Downey et al., 2012). They play pivotal roles in benthic community dynamics, by providing heterogeneous three-dimensional architectures and spatial complexity, along with being involved in benthic-pelagic coupling (Schiaparelli et al., 2003; McClintock et al., 2005; Kersken et al., 2016). Recently, Antarctic Porifera have generated increased interest from a microbiological perspective, highlighting their extraordinary plasticity as hosts of a diverse range of microorganisms associated with their surface and/or within their tissues (e.g., bacteria, diatoms, dinoflagellates) (e.g., Regoli et al., 2004; Rodríguez-Marconi et al., 2015; Cárdenas et al., 2018; Papale et al., 2020; Cristi et al., 2022). In a pioneering study using a molecular approach to describe the associated bacterial communities, Webster et al. (2004) first demonstrated that a significant portion of the retrieved diversity was sponge-specific. Subsequent investigations, including the characterization of bacterial isolates (e.g., Mangano et al., 2009, 2014; Papaleo et al., 2012; Caruso et al., 2018; Savoca et al., 2019), have seldom applied high-throughput molecular approaches to describe the microbiome associated with Antarctic sponges (e.g., Rodríguez-Marconi et al., 2015; Cárdenas et al., 2018, 2019; Sacristán-Soriano et al., 2020; Ruocco et al., 2021; Cristi et al., 2022; Happel et al., 2022). Overall, these studies have confirmed a high degree of host specificity, revealing differences from tropical and temperate sponges, such as the absence of Cyanobacteria and Poribacteria, and the dominance of Proteobacteria (mostly Alpha- and Gammaproteobacteria). Moreover, the microbiomes (including Bacteria, Archaea and Eukarya) of Antarctic marine sponges were found to be distinct from those of their temperate and tropical counterparts (Sacristán-Soriano et al., 2020). Functional insights highlighted that Antarctic sponge-associated prokaryotes play important roles in processes such as nutrient cycling, establishment of symbiosis, biosynthesis of secondary metabolites (including antibiotics) and quorum sensing, alongside their involvement in the biodegradation of aromatic compounds (Steinert et al., 2019; Moreno-Pino et al., 2020; Papale et al., 2020; Cristi et al., 2022). Due to their filter-feeding habit, sponges not only recruit symbiotic microorganisms but also retain bacteria, phytoplankton, and organic matter for their sustenance, while potentially accumulating in their mesohyl tissues pollutants associated with water-borne particles. This makes them potential bioindicators of pollution (Perez et al., 2003; Stabili et al., 2008). Despite its remoteness, Antarctica is not exempt from anthropogenic contamination. In fact, it is even increasingly affected by human activities at local (including tourism and research) and global scales (e.g., due to cold condensation, global fractionation, and long-range atmospheric transportation), which may prejudice its environmental, scientific and historic values (Caruso et al., 2022a). Evidence of pollutants, such as
persistent organic pollutants (POPs; Vetter and Janussen, 2005; Pala et al., 2023), heavy metals (Bargagli et al., 1996; de Moreno et al., 1997; Negri et al., 2006; Illuminati et al., 2016) and microplastics (e.g., synthetic microfibers; Corti et al., 2023), has seldom been reported in Antarctic sponges. The fate of pollutants may be strictly linked to microorganisms (possessing genetic and biochemical capacities for the remediation of pollutants), which constitute the first step in the transfer of toxic compounds to higher trophic levels. It is expected that microbes associated with sponges may have to cope with the presence of contaminants in the host tissues. However, to the best of our knowledge, contamination level and prokaryotic community composition in Antarctic sponges have never been treated together in the same study.

In this context, this study aimed at assessing the prokaryotic community composition, alongside the bioaccumulation of pollutants (including polychlorinated biphenyls, polycyclic aromatic hydrocarbons, and trace elements), in two Antarctic sponge species, namely Haliclona (Rhizoniera) dancoi (Topsent 1901) and H. (Rhizoniera) scotti (Kirkpatrick 1907), collected in the Thetys Bay (Terra Nova Bay, Ross Sea). Notably, H. scotti is a quite rare species in the Ross Sea; recently, it was found in the Thetys Bay 114 years after its original description in 1907, and then re-described (Costa et al., 2023). The main objectives of the present study were: (1) to identify potential differences between the two sponge species in terms of associated prokaryotes and bioaccumulation of targeted pollutants, also in relation to data obtained for seawater and sediment; (2) to preliminarily evaluate the possible relationship between pollutant accumulation (e.g., as an anthropogenic stress factor) and prokaryotic community composition in both sponge species.

Sampling area

Thetys Bay, located near the Italian Mario Zucchelli Station (MZS), is a small inlet extending 3 km from its inner to outer boundaries. It is connected to the open polynya waters of the Terra Nova Bay (Ross Sea). The sea bottom (10–50 m depth) features a mix of partial rocky and muddy cover, where the benthic vegetation is scarce (Calizza et al., 2018). The sea-ice dynamics exhibit a marked seasonality, with the absence of ice during the austral summer and an ice cover that generally lasts until December. This seasonal pattern strongly affects primary productivity and the development of phytoplankton blooms and, in turn, the food supply to benthic communities, with repercussions on their distribution (Pusceddu et al., 1999).
Sponge specimens were collected in the Thetys Bay in 2018 (inner bay) and 2019 (outer bay), during the XXXIII and XXXIV Italian Expeditions to Antarctica (n = 9), respectively (Table 1; Figure 1). Fragments of sponge individuals were packed underwater in sterile plastic bags and stored at +4 °C for transportation to the laboratory for preliminary processing (within 2 h after sampling). Sponge surfaces were rinsed at least three times with filter-sterilized seawater and dissected (Mangano et al., 2009; Steinert et al., 2019). Fragments of each specimen were then preserved at −20 °C for DNA extraction and chemical analyses (in aluminum foils), and in 70% ethanol for taxonomic classification (previously reported by Costa et al., 2023). The collection of sponges was previously authorized by the Programma Nazionale di Ricerche in Antartide (PNRA), in conformity with the Antarctic Treaty legislation and the SCAR Code of Conduct for the Use of Animals for Scientific Purposes. A fragment of each sponge individual was deposited at the Italian National Antarctic Museum (MNA, Section of Genoa, Italy) under the MNA voucher codes listed in Table 1.

Seawater and sediment samples were collected at the same time within a 10 cm radius of the sponge individuals. For the subsequent microbiological analyses, seawater samples (between 1.5 and 2.0 L) were filtered on polycarbonate membranes (diameter 47 mm; pore size 0.22 µm) and stored at −20 °C until processing, while sediment samples were directly stored at −20 °C in sterile containers. For the extraction of persistent organic pollutants, sediment samples were collected with an aluminum scoop, placed in aluminum foil (previously decontaminated by washing with acetone and subsequently with hexane), and then stored at −20 °C until analysis. For trace metal analysis, sediments were directly collected using plastic containers and then stored at −20 °C until analysis. Seawater samples were collected in duplicate using 25 L stainless steel bottles. The bottles were then taken to the laboratories of MZS, where the extraction took place within 1–2 days from the moment of sampling (see below), with the aim of minimizing any changes in the analytes due to prolonged storage.

Persistent organic pollutant analysis

Among POPs, polychlorobiphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs) were determined in sponge, water and sediment samples. However, due to logistical constraints, seawater samples were not collected in 2018 for POP extraction. The simplified workflow of the extraction procedure is reported in Figure 2.

Chemicals

Isooctane, n-hexane and acetone were Pestanal grade from Riedel-De Haen; HPLC-quality double-distilled water was from Fluka Analytical, Sigma Aldrich; sodium sulfate was from Carlo Erba Reagents; TBA sulphite and 2-propanol were from Sigma Aldrich. The internal standard, used to determine the recovery of the sample treatment from extraction to analysis, was obtained from L429-IS (Wellington Laboratories, deuterated PAHs) and 13

Extraction of sponge samples and extract purification

Sponge samples were freeze-dried (dry sponge weight was on average 16% of total weight) and homogenized with the help of a mortar. A known volume (10 µl) of method standard (MET) was added to each sample (3 g) prior to extraction. A mixture (30 ml) of n-hexane/acetone 1:1 was added to perform a solid–liquid extraction using an ultrasonic bath for 40 min at 45 °C.
Once back to room temperature, the supernatant was recovered and two further extractions were carried out. The organic phases recovered from the three extractions were combined and reduced to a volume of 5 ml using a rotary centrifugal evaporator. After centrifugation (5,000 rpm for 30 min), the recovered solution was concentrated to ∼3 ml. The extracts were then purified on SPE cartridges after the construction of an elution curve. For all porifera samples, 5 ml of hexane were used to elute the sample through the SPE column. A procedural blank was also performed, using all the reagents and solvents foreseen by the procedure, in the absence of the sponge sample, following the same procedures described above. The results of the blank determination were used to correct sample measurements or to detect errors due to interference from contaminants present in the reagents. Three replicate tests were carried out per sample.

Extraction of sediment samples and extract purification

The total humidity of the sediment samples was determined by oven drying: the percentage of residual water present in the samples varied around an average of 23%. Sediment samples were left to thaw overnight in an ISO 5 clean lab, leaving them in the original aluminum containers. Sediments were then passed through a 2 mm mesh sieve, and a known volume (10 µl) of MET was added to each sample (30 g) before extraction. After the addition of acetone (50 ml), a solid–liquid extraction was performed using an ultrasonic bath for 30 min at 60 °C. Acetone was used to facilitate the subsequent extraction action of hexane, increasing the wettability of the sediments. Once returned to room temperature, the supernatant was recovered and a 1:1 n-hexane/acetone solution (40 ml) was added to each sediment sample, carrying out a second extraction in the ultrasonic bath for 30 min at 60 °C. The two organic phases were combined and subsequently reduced to a volume of 10 ml using a rotary centrifugal evaporator. Samples were filtered with a PTFE filter (diameter 33 mm; porosity 0.45 µm) to eliminate suspended particles, after conditioning the filter with hexane, and subsequently reduced to ∼2 ml.

To remove sulfur, we used an efficient, rapid, non-toxic method that is non-destructive toward the analytes of interest (Jensen et al., 1977): briefly, a solution was prepared by adding 3.39 g of tetrabutylammonium sulphite to 100 ml of double-distilled water; subsequently, three washes were performed with 20 ml of hexane each; the solution was then saturated with 25 g of sodium sulphite. The solution was placed in a dark glass bottle and stored at room temperature.

FIGURE 2 Scheme of the extraction procedure applied to seawater, sponge and sediment samples prior to analysis.

To each extract (2 ml), 1 ml of 2-propanol and 1 ml of TBA solution were added, followed by vortexing for 1 min. Sodium sulphite was added until saturation. Double-distilled water (5 ml) was added, followed by stirring for 1 min. The solution was left to
rest for about 10 min to allow the phases (i.e., aqueous and organic) to separate. After freezing overnight, the organic phase was recovered and the extracts were purified with Si SPE columns, activated with ∼2 ml of hexane. After sample recovery, 2 ml of isooctane were added and the samples were reduced, using a centrifugal evaporator, to ∼1 ml in volume. After checking the volume by weight, a spike of injection standard (INJ), equal to one hundredth of the volume, was added to each sample using a graduated micro syringe. A procedural blank was also performed, using all the reagents and solvents foreseen by the procedure, in the absence of the sediment sample, following the same procedures described above. Three test replicates per sample were carried out.

Extraction of seawater samples and extract purification

Before extraction, a known amount of the "method-standard" solution was added to the samples. The samples were immediately extracted twice with 20 ml of n-hexane using a custom-made extraction system (Zoccolillo et al., 2004). The two aliquots of organic phase were recovered and combined, and the volume of the extracted water was measured accurately. The organic phases were then stored in glass containers at −20 °C until their arrival in Italy. Once in the analytical laboratory, the extracts were treated with anhydrous Na2SO4 immediately before the analysis. After recovering the solution, a solvent exchange was performed by adding 1 ml of isooctane and reducing the volume of the sample to about 1 ml in a centrifugal vacuum evaporator. Finally, a known amount of the "injection-standard" solution was added to the sample. A procedural blank was also carried out using pure water as a sample.

Gas-chromatography/mass-spectrometry analyses

Standard solution analysis was carried out to optimize instrumental parameters in TIC (Total Ion Current) mode. Then, a Multiple Reaction Monitoring (MRM) program was created: two different precursor ion–product ion transitions were chosen for each analyte; the most intense was selected as "quantifier", while the other served as "qualifier". Retention times and selected transitions are given in Supplementary Table S1.

The instrument used was an Agilent GC 7890B coupled with a MS 7010 triple quadrupole mass spectrometer, equipped with an Agilent ALS autosampler and an MMI (Multi Mode Injector) injection port, used in the solvent vent mode. The column used was an HP-5MS UI (95% dimethyl-, 5% phenyl-polysiloxane; 30 m × 0.250 mm; film thickness 0.25 µm). The mobile phase was helium, with a flow of 1.2 ml/min and a pressure of 11,052 psi. For the collision cell, helium was used with a flow of 4.0 ml/min, while nitrogen was used with a flow of 1.5 ml/min at a pressure of 10 psi. The injector was at an initial temperature of 85 °C with the split valve open for 0.53 min for solvent evaporation. Then the split valve was closed, and the temperature was increased to 300 °C at a rate of 600 °C/min and held for 10 min. The oven temperature program for the analysis was set as follows: 70 °C for 3 min; 50 °C/min up to 150 °C; isotherm for 2 min; 5 °C/min up to 310 °C. All analyses were performed by injecting 2 µl of sample.
The data system contains all the software required for calibration, GC/MS-MS spectra collection and data processing for qualitative and quantitative analysis. Several field blank samples were prepared in the clean laboratory at the Italian base in Antarctica with MilliQ-grade pre-extracted water and analyzed with the same MRM procedure used for water, porifera and sediment samples. The limit of detection (LOD) and the limit of quantification (LOQ) were calculated for each compound as three times and ten times, respectively, the standard deviation of the blank (calculated on seven replicate blanks); a short worked sketch of this calculation is given below, after the DNA extraction paragraph. The LOD and LOQ ranges were 0.0001–0.001 and 0.0004–0.006 ng/L for PCBs, and 0.0001–0.04 and 0.0003–0.1 ng/L for PAHs, respectively. The analytes were grouped according to the expected concentration range, and the corresponding calibration curves were obtained based on the following eight concentration levels:

- ACY, ACE, FLU, PHE, ANT, F, and Py at 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0 ng/L;
- BaA, C, BbF, BkF, BaP, IP, and BP at 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0 ng/L;
- all PCBs at 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2 ng/L.

All the calibration curves were linear over the observed concentration range, with r² values typically of 0.999 and always better than 0.997.

Trace metal analysis

Trace elements were determined in sponge and sediment samples collected in 2018 and 2019. The total mercury (Hg) content was determined by a Milestone DMA-80 Direct Mercury Analyser following the US EPA Method 7473. All analyses were carried out in triplicate. Blanks were run every 10 samples and, to ensure the quality of the results, the certified standards ERM-CC018 (Contaminated Sandy Soil) and MESS-3 (Marine Sediment) were used as reference materials. Relative standard deviation (RSD) and accuracy, evaluated by five replicate analyses, were within 5%.

Sediment (500 mg) and sponge (100 mg) samples were digested with inverse aqua regia (6 ml 65% HNO3 + 2 ml 37% HCl, Suprapur grade) using a Milestone Ethos Easy microwave platform. After acid digestion, the solutions were brought up to 50 ml with Milli-Q water and appropriately diluted before analysis. The concentration of a set of trace elements (Li, Be, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Sr, Mo, Ag, Cd, Sn, Sb, Ba, Tl, Pb, Th, and U) was determined by ICP-MS using a Perkin Elmer NexION 300X. The analytical uncertainty was evaluated by replicate analysis (n = 10) of the reference materials NIST 2711a and ERM-CC018. In general, the accuracy was better than 10%. Precision values, as relative standard deviation, were better than 5% for Li, Be, Mn, Ni, Ag, Sn, Cd, Tl, Pb, Fe, and As, and within 10% for Co, Cu, Zn, Sr, Sb, Ba, Th, U, V, and Cr.

Prokaryotic community diversity and composition

DNA extraction

Total DNA was extracted from sponge homogenates, sediment and filter membranes (for seawater) using the Power Soil DNA extraction kit (MoBio Laboratories, Carlsbad, CA, USA) according to the manufacturer's instructions. The concentration and purity of the extracted DNA were checked using a NanoDrop ND-1000 UV-vis spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). The DNA extracted from each specimen was sequenced in triplicate.
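The LOD/LOQ rule quoted above (three and ten times the standard deviation of seven replicate blanks) can be reproduced in a few lines of R; the blank readings below are invented purely for illustration and are not the study's data.

    # LOD and LOQ from replicate procedural blanks, as described above.
    # The blank readings are illustrative placeholders (ng/L, n = 7).
    blanks <- c(0.0012, 0.0009, 0.0011, 0.0010, 0.0013, 0.0008, 0.0011)

    lod <- 3 * sd(blanks)    # limit of detection: 3 x SD of the blanks
    loq <- 10 * sd(blanks)   # limit of quantification: 10 x SD of the blanks
    round(c(LOD = lod, LOQ = loq), 4)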
Amplification and sequencing of 16S rRNA genes

Briefly, the sequencing reaction was set up as follows: 2.5 µl of microbial DNA (5 ng/µl), 5 µl each of forward and reverse primer at a concentration of 1 µM, and 12.5 µl of KAPA HiFi HotStart ReadyMix (Roche Sequencing Solutions, Milan, Italy). Polymerase chain reaction (PCR) was performed with an Applied Biosystems 9700 thermocycler, following the program: 3 min at 95 °C, followed by 25 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s, with a final extension at 72 °C for 5 min. The Agilent 2100 Bioanalyzer with a DNA 1000 chip was used to verify the PCR product size. AMPure XP beads were used to purify the amplicon product from free primers and primer-dimer species. The DNA concentration of each PCR product was determined using a Qubit® 2.0 Green double-stranded DNA assay. Depending on the coverage needs, all libraries could be pooled for one run. The amplicons from each reaction mixture were pooled in equimolar ratios based on their concentrations. The bacterial 16S rDNA region V3–V4 was amplified using the universal primers 341F (5′-CCTACGGGNGGCWGCAG-3′) and 805R (5′-GGACTACHVGGGTATCTAATCC-3′) (Kozich et al., 2013), incorporated by a Nextera XT Index kit (Illumina, San Diego, CA, USA). Libraries were normalized based on fragment length and dsDNA molarity. The normalized samples were combined and processed in four sessions using a MiSeq platform (Illumina) and the MiSeq reagent kit v3-600 for 2 × 300 paired-end sequencing at IGA Technology Services Srl (Udine, Italy).

Post-run analysis

Raw sequences were quality-checked using the FastQC tool (Brown et al., 2017). All subsequent analysis steps (quality filtering, trimming, de-noising, merging, cleaning, and affiliation) were performed using the R package DADA2 (Weißbecker et al., 2020) to infer amplicon sequence variants (ASVs), i.e., biologically relevant variants rather than arbitrarily clustered groups of similar sequences. In particular, the quality and cleaning parameters were as follows: minimum length between 140 and 150 bp; no reads with N bases were retained in the analysis; and all sequences were trimmed at the ends after quality control (trimLeft = 17, trimRight = 15). During the analysis, filters for reducing replicate, length, and chimera errors were also applied. Bacterial taxonomy annotation was performed using the Silva database formatted for DADA2, offering an updated framework for annotating microbial taxonomy (silva_nr99_v138.1_wSpecies_train_set.fa.gz and silva_species_assignment_v138.1.fa.gz). Finally, a manual inspection was done.
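For orientation, the DADA2 workflow just described can be condensed into a short R sketch. The trimming parameters and Silva files are those reported above; the read-file patterns, output directory, and remaining arguments are illustrative assumptions, not the authors' exact script.

    # Sketch of the DADA2 ASV workflow described above (R). File locations
    # and unspecified arguments are placeholders.
    library(dada2)

    fnFs <- sort(list.files("reads", pattern = "_R1.fastq.gz", full.names = TRUE))
    fnRs <- sort(list.files("reads", pattern = "_R2.fastq.gz", full.names = TRUE))
    filtFs <- file.path("filtered", basename(fnFs))
    filtRs <- file.path("filtered", basename(fnRs))

    # Quality filtering with the parameters reported in the text: primer bases
    # trimmed from both ends, reads with ambiguous bases discarded.
    filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                  trimLeft = 17, trimRight = 15,
                  minLen = 140, maxN = 0, multithread = TRUE)

    # Error learning, denoising, read merging, and chimera removal.
    errF <- learnErrors(filtFs, multithread = TRUE)
    errR <- learnErrors(filtRs, multithread = TRUE)
    mergers <- mergePairs(dada(filtFs, err = errF, multithread = TRUE), filtFs,
                          dada(filtRs, err = errR, multithread = TRUE), filtRs)
    seqtab <- removeBimeraDenovo(makeSequenceTable(mergers), method = "consensus")

    # Taxonomic annotation against the Silva v138.1 files named in the text.
    taxa <- assignTaxonomy(seqtab, "silva_nr99_v138.1_wSpecies_train_set.fa.gz")
    taxa <- addSpecies(taxa, "silva_species_assignment_v138.1.fa.gz")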
Statistical analyses

Diversity indices, namely the Shannon diversity index (H′) (Shannon, 1948), Simpson (Simpson, 1949), Chao1 (Chao and Jost, 2012), and Fisher (Fisher et al., 1943) indices, were calculated for each sample with the estimate_richness function of the R package phyloseq. Principal component analysis (PCA) was performed to compare the bacterial community compositions across groups of samples and was obtained using the factoextra R package on phylum-level bacterial data. The relative abundance data were transformed and processed by calculating the Bray–Curtis similarity. The retrieved pollutant concentrations and bacterial phylum abundances were then used to calculate and plot the Spearman correlation matrix (R package corrplot; a sketch of this step is given after the correlation results below), considering results statistically significant when p < 0.05. The entire dataset (including chemical analyses and bacterial abundances at genus level) was used to perform a PCA showing the spatial distribution of samples according to their distance, and the correlation with related factors (Pearson correlation factor >0.9).

Determination of POPs and trace metals

The entire dataset, including standard deviations, obtained from the determination of POPs and trace metals is reported in Supplementary Table S2.

In 2019, higher concentrations of PCB congeners were again detected in sponges than in sediment samples, with values higher than those determined in 2018. A number of congeners were particularly enriched in H. dancoi with respect to H. scotti, as follows: PCB89 + PCB101 (218.3 and 68.0 pg/g, respectively), PCB28 + PCB31 (208.6 and 149.5 pg/g, respectively), PCB52 (150.3 and 135.6 pg/g, respectively), and PCB110 (192.5 pg/g in H. dancoi; absent in H. scotti). In water and sediment samples the overall PCB concentration was estimated at about 10 pg/L and 10 pg/g, respectively (Figure 3B).

Polycyclic aromatic hydrocarbons

In 2018, as reported above for PCBs, PAHs were more concentrated in sponges than in sediment samples (being an order of magnitude higher). Phenanthrene (1,555.9 and 1,708.3 pg/g in H. dancoi and H. scotti, respectively), pyrene (2,744 and 4,083.6 pg/g in H. dancoi and H. scotti, respectively) and chrysene (3,673.9 and 1,936.5 pg/g in H. dancoi and H. scotti, respectively) were particularly enriched in both sponge species (Figure 4A).

A similar trend in the concentration of some PAHs was observed for the 2019 samples. The highest values were retrieved for naphthalene and methylnaphthalene (5,497.9 and 10,112.0 pg/g in H. dancoi, and 5,769.2 and 9,720.6 pg/g in H. scotti, respectively), followed by fluorene (6,333.2 and 2,827.5 pg/g in H. dancoi and H. scotti, respectively) and pyrene (3,229.7 pg/g in H. dancoi and 3,657.6 pg/g in H. scotti, respectively). Sediment samples showed higher concentrations of PAHs than water, with values of about 100 pg/g (Figure 4B).

Trace metals

Samples collected in 2018 showed a high concentration of As, ranging from 8.8 mg/kg in sediments to 16.3 and 16.0 mg/kg in H. scotti and H. dancoi, respectively. A higher amount of some metals was detected in H. dancoi than in both H. scotti and sediment samples, e.g., Ni (705 mg/kg), Zn (1,854 mg/kg) and Cd (375 mg/kg). Mo was detected in equal amounts in both sponge species, at 3.7 mg/kg (Figure 5A).

In 2019, most heavy metals were found at higher concentrations in sponge tissues than in sediment samples, with few exceptions, i.e., Mn (120.3 mg/kg in sediment vs. 2.9 and 18.8 mg/kg in H. dancoi B and H. scotti C, respectively), Sr (94.4 mg/kg in sediment vs.
70.8 and 78.4 mg/kg in H. dancoi B and H. scotti C, respectively) and Ba (34.4 mg/kg in sediment vs. 4.6 and 8.9 mg/kg in H. dancoi B and H. scotti C, respectively). Concentrations of 22.7 and 17.9 mg/kg of As were detected in individuals of H. dancoi and H. scotti, respectively, while in sediments As was detected at a concentration of 3.3 mg/kg. The highest concentration of Cd (45.8 mg/kg) was retrieved in H. scotti, whilst in both H. dancoi and sediment samples the concentration of this metal was negligible (Figure 5B). Finally, Hg was particularly enriched in H. scotti (592 µg/kg) and, to a lesser extent, in H. dancoi (92.9 µg/kg) with respect to sediment, which showed a concentration of 10.3 µg/kg (Figure 5C).

Prokaryotic community diversity and composition

Data on total sequence reads, quality trimming, ASV information and diversity indices obtained for the samples included in this study are reported in Supplementary Table S3. Shannon and Simpson indices showed that the diversity level was similar in all sponge samples, while the Chao1 index was lower for the samples H. dancoi B1 and H. dancoi B3. The prokaryotic communities were mainly represented by bacteria, with overall percentages of 98%–99%. The only exception was H. dancoi B1, collected in 2018, in which bacteria predominated with 95.7% of the total microbial community, while 3.6% was represented by Archaea. Overall, the taxonomic composition of bacterial assemblages at phylum level showed the predominance of Proteobacteria and Bacteroidota, followed by Firmicutes and Planctomycetota (Figure 6). The taxonomic structure of the microbial communities is detailed in the following sections. The abundance of each group is expressed as relative abundance within the total bacterial/archaeal community.

Haliclona dancoi

The specimens of H. dancoi sampled in 2018, namely B1 and B3, showed a predominance of Proteobacteria (abundances of 40.0 and 49.2%) and Bacteroidota (40.0 and 49.2%), respectively. Similarly, the specimen collected in 2019 (namely 1Sp2a) showed Proteobacteria and Bacteroidota average relative abundances of 53.5 and 10.8%, respectively. Firmicutes occurred at higher abundance in the H. dancoi specimens B1 and B3, accounting for 25.1 and 27.5%, respectively, while an average abundance of 8.2% was detected in the specimen collected in 2019. Conversely, Planctomycetota were more represented in H. dancoi collected in 2018, with a relative abundance of 6.4% (Figure 6).

Differences in the relative abundance of some taxonomic groups were observed between the 2018 and 2019 samplings at the order level (Supplementary Table S4). For instance, Propionibacteriales were more abundant in specimens collected in 2018 (relative abundances of 38 and 39.1% in B1 and B3, respectively) than in 2019 (2% in 1Sp2a). Similarly, Bacteroidales were abundant in 2018 (11.8 and 6.8% in B1 and B3, respectively), but they were absent in 1Sp2a collected in 2019. Flavobacteriales accounted for 10.5% in 1Sp2a and 2.7% in both H. dancoi specimens collected in 2018. Conversely, Rickettsiales and Rhodobacterales were nearly absent in the sponges collected in 2018 but were abundant (17.7 and 6.9%, respectively) in 1Sp2a collected in 2019. Burkholderiales and Enterobacterales showed similar abundances in samples collected in both years, ranging from 3.3 to 5.8% and from 12.1 to 13.3% in 2018 and 2019, respectively (Figure 7A).

Haliclona scotti

Proteobacteria and Bacteroidota also dominated the bacterial communities associated with H.
scotti. Specifically, Proteobacteria sequences accounted for between 60.9 and 64.3% in samples collected in 2018, while they ranged from 80.5 to 95.7% in sponge specimens collected in 2019. Bacteroidota occurred at higher percentages in samples collected in 2018 (range 20.1%–26.2%) than in 2019 (relative abundance ranging from 0.5 to 12.3%; Figure 6). At order level, members of the UBA10353 marine group and Pseudomonadales were the most abundant in all H. scotti specimens (Supplementary Table S4). The former accounted for 32.3%–39.6% in 2018 and 28.9%–54.9% in 2019. Pseudomonadales showed relative abundances between 13.8 and 17.3% in 2018 and higher abundances (range 37.4%–43.1%) in individuals collected in 2019. Flavobacteriales were more abundant in individuals collected in 2018 (25.4%–26.8%), as were Rickettsiales and the SAR11 clade, even if these were characterized by relative abundances lower than 10% (Figure 7B).

The archaeal community of the analyzed sponges was exclusively composed of Candidatus Nitrosopumilus members (Group 1a Nitrososphaerota, formerly Thaumarchaeota), with the exception of the specimen H. dancoi B1, which showed the sole presence of Methanothermobacter (among Euryarchaeota) sequences as archaeal representatives.

In line with the results obtained for the bacterial communities associated with the two sponge species, bacterial communities inhabiting sediment and seawater samples collected in 2019 were dominated by Proteobacteria and Bacteroidota. Proteobacteria accounted for 20.0 and 19.3% in the two sediment sample replicates, respectively, and for 69.3 and 61.2% in the two seawater replicates, respectively. Bacteroidota showed abundances of 19.1 and 18.7% in the two sediment sample replicates, respectively, and of 21.1 and 26.8% in the two seawater sample replicates, respectively. However, some taxa detected in sediment samples were absent or represented at lower abundances in sponge tissues, i.e., Campylobacterota (relative abundances of 10.6 and 11.5% in SED19a and SED19b, respectively), Desulfobacterota (7.1 and 6.8% in SED19a and SED19b, respectively), Firmicutes (5.6 and 5.4% in SED19a and SED19b, respectively) and Planctomycetota (6.4 and 6.8% in SED19a and SED19b). Conversely, Cyanobacteria were retrieved almost exclusively in water samples (relative abundance 8.1% in WAT19b; Figure 9).

No archaeal taxa were detected in water samples from 2018. Conversely, archaeal communities were represented by Euryarchaeota and Halobacterota in water samples from 2019, with relative abundances of 69.2 and 30.8% of the total archaeal community, respectively. Sediment archaeal communities were composed of Nanoarchaeota (relative abundance of 78.6% of the total archaeal community), Euryarchaeota (5.7%), and Crenarchaeota (2.8%) in the sample from 2018. Nanoarchaeota and Euryarchaeota (accounting for 58.6 and 41.4% of the total archaeal community, respectively) predominated in sediment sampled in 2019.

Abiotic vs. biotic matrices
The statistical analysis performed by PCA (Figure 10) showed a separation of sediment, water and sponge samples depending on the taxonomic composition of the bacterial communities at phylum level. Sediment samples grouped together and showed a positive correlation with Desulfobacterota, Campylobacterota, Verrucomicrobiota and Acidobacteriota, while water samples were more closely related to Cyanobacteria and Planctomycetota. Finally, sponges were mainly related to Nitrospinota, Proteobacteria, Bdellovibrionota and unknown sequences. The two sponge species clustered separately according to species and year of collection. Indeed, a first cluster was composed of all sponges collected in 2018, with H. dancoi B1 and B3 bacterial communities more related to Firmicutes and Actinobacteriota, and H. scotti C1, C2, and C3 bacterial communities more related to Bacteroidota, Synergistota and Desulfobacterota (the latter exclusively present in H. scotti C2). A second large cluster was formed by all samples collected in 2019, even if the single specimen of H. dancoi, 1Sp2a, was separated from the three H. scotti individuals, probably due to the higher abundances of some taxonomic groups, such as Dependentiae, Planctomycetota and Campylobacterota.

Pollutant level vs. prokaryotes

Correlations between the relative abundances detected at phylum level and pollutant concentrations were calculated (Figure 11). A strong negative correlation was detected between phenanthrene concentration and Acidobacteriota abundance (p ≤ 0.001), while Actinobacteriota, Acidobacteriota, Planctomycetota, and Bacteroidota were negatively correlated with low-chlorinated PCB congeners (p ≤ 0.05). Strong negative correlations were detected between Acidobacteriota and both Sb and Mo, as well as between Actinobacteriota and Sb, Mo and Tl. A positive correlation was generally observed between bacterial phyla and trace metal concentrations. For instance, Cyanobacteria positively correlated with both Ba and Fe, while Verrucomicrobiota positively correlated with both V and U. Nitrospinota was the sole phylum showing a significant positive correlation with a PCB (namely PCB170). No positive correlations were generally observed between bacteria and PAHs.

Finally, a separation of samples between abiotic and biotic matrices was shown by the PCA (Figure 12). The two principal components explained 87.5% of the total variance, with PC1 and PC2 accounting for 69.7 and 17.8% of the variance, respectively. The overlapping of the vectors relates to parameters with a Pearson correlation factor >0.9. Specifically, a separate cluster included the two sponge samples collected in 2019, which were more correlated with the concentrations of PCBs and PAHs. A larger cluster was constituted by three subclusters: the first including the sponge samples collected in 2018, which were more correlated with the presence of PCB105 and higher abundances of Lawsonella sequences; the second including sediment samples, which were more correlated with the presence of PCB151 and Ilumatobacter, Haloferula, and Roseibacillus sequences. Water samples clustered individually.
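As an illustration of the correlation analysis described under Statistical analyses and summarized in Figure 11, the following R sketch computes the Spearman matrix between pollutant concentrations and phylum-level relative abundances and blanks out non-significant cells at p < 0.05. The objects 'pollutants' and 'phyla' are hypothetical placeholders (samples in rows, numeric variables in columns), not the study's actual data.

    # Spearman correlations between pollutants and bacterial phyla, plotted with
    # corrplot as described in the statistical methods. Input objects are
    # hypothetical numeric data frames with matched sample rows.
    library(corrplot)

    cors <- cor(pollutants, phyla, method = "spearman")

    # One p-value per pollutant/phylum pair via cor.test.
    pvals <- outer(seq_len(ncol(pollutants)), seq_len(ncol(phyla)),
                   Vectorize(function(i, j)
                     cor.test(pollutants[[i]], phyla[[j]],
                              method = "spearman")$p.value))

    # Leave cells with p >= 0.05 blank, matching the significance threshold used.
    corrplot(cors, p.mat = pvals, sig.level = 0.05, insig = "blank")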
Discussion

Even in remote areas, such as Antarctica, biological communities are subjected to stressors of anthropogenic origin. These include pollution, which may derive from tourism and research activities in Antarctica (i.e., occurring through local discharges and emissions), as well as from the long-range atmospheric transport of pollutants produced at lower latitudes. Porifera are among the first filter-feeding animals that have been recognized as suitable pollution sentinels for tracking trends in anthropogenic contamination in marine coastal waters (Batista et al., 2014). In fact, they have all the requirements needed to be considered bioindicator species, such as high abundance and wide distribution, a sessile and long life, and high tolerance to a variety of environmental factors (Krikech et al., 2022). In this study, two Antarctic sponge species within the genus Haliclona (namely H. scotti and H. dancoi) were characterized for the accumulation of POPs (PCBs and PAHs) and trace metals with respect to their bulk environment. To the best of our knowledge, in contrast to H. dancoi and other non-described Haliclona spp., which were previously analyzed for biological and chemical traits and are widespread in Antarctica, H. scotti has never been analyzed for its associated microbial community nor for its pollutant content.

Sediments are commonly considered a valid environmental indicator for monitoring the pollution of water and marine organisms, due to the settling of suspended particulates on the bottom. However, in this study, PCBs and PAHs were highly enriched in both sponge species (both in 2018 and 2019) in comparison to sediment (and seawater) samples, suggesting that sponges might be used as pollution indicators even in Antarctica. A variable trend was observed in the case of trace metals, which were often more concentrated in sediment than in sponges or showed concentrations similar to those determined in at least one sponge species. Overall, the different bioaccumulation patterns of the tested contaminants determined in H. scotti and H. dancoi contextually collected from the same sampling site were most likely associated with inter-specific differences in their morphological and physiological traits. Conversely, the differences encountered in pollutant concentrations determined in 2018 and 2019 might depend on environmental features that differently characterized the inner and outer parts of the Thetys Bay at sampling time. These aspects need to be further investigated to identify the mechanisms involved in pollutant accumulation and excretion processes carried out by sponges, and in pollutant accumulation by sponges over time and space.

FIGURE 12 Principal component analysis computed on the entire dataset, including chemical and microbiological results (in terms of relative abundance of bacterial taxa at genus level).

Vetter and Janussen (2005) first analyzed five species of Antarctic sponges (Kirkpatrickia variolosa, Artemisina apollinis, Phorbas glaberrima, Halichondria sp., and Leucetta antarctica) from the King George Island to determine the presence of halogenated compounds. PCBs were not detected, whereas lindane, p,p′-DDE and alpha-HCH, in decreasing order of abundance, were detected in traces. The bioaccumulation of PCBs, alongside other POPs (i.e., hexachlorobenzene, HCB, and dichlorodiphenyltrichloroethane, DDT), was recently reported by Pala et al. (2023) in 25 Antarctic sponge specimens collected from the Terra Nova Bay in 2005.
Unfortunately, the authors did not analyze sediment or seawater to determine the pollution level in the environment surrounding the analyzed sponges. Among the seven sponge specimens collected from the Thetys Bay and analyzed by Pala et al. (2023), a single specimen (namely TB6) belonged to H. dancoi. The authors reported a concentration pattern (i.e., PCB > DDT > HCB) that was common to all samples. Overall, the concentrations of most PCB congeners in H. dancoi (pg/g of dry weight) determined in our study were generally lower than those previously reported by Pala et al. (2023) for the same species (range 70–530 pg/g). An exception was, for example, PCB105 (2,3,3′,4,4′-pentachlorobiphenyl), which was below the detection level in Pala et al. (2023) but accounted for 23.3 pg/g in the H. dancoi analyzed in our study. However, even if we recorded an increase in PCBs in the 2019 samples compared to those of 2018 (possibly depending on the different sampling sites, i.e., the inner and outer bay), this result gives rise to the hope that the banning of PCB production in the 1970s has determined a decreasing trend in PCB transport and bioaccumulation at higher latitudes. According to Pala et al. (2023), long-range atmospheric transport was the major driver of POP contamination in the investigated area. However, in a global change scenario, several additional factors (e.g., wildlife amplification and ice-melting), in addition to the ever-increasing human presence and activities in Antarctica (e.g., research, tourism and fishing), should also be taken into consideration in the future to identify those factors that could synergistically stress benthic communities.

PAHs occurring in the marine environment derive from anthropogenic sources. A number of studies have targeted sediment or marine species other than sponges (i.e., mollusks and fish) for the accumulation of PAHs (e.g., Negri et al., 2006; Curtosi et al., 2009; Palmer et al., 2022). To the best of our knowledge, the present study represents the first record of PAH accumulation in Antarctic sponges. PAHs were detectable in the sponges; only anthracene and benzo[a]pyrene were below detection limits, and total PAH concentrations ranged from 29.7 to 41.6 ng/g dry weight. All samples had a higher proportion of the lighter (2- to 3-ring) parent PAHs (>63%), with naphthalene and methylnaphthalene being detected in all samples and having the highest concentrations in sponge samples. This high proportion of lighter PAHs is probably due to the sponges' greater uptake directly from the filtered water and to the higher solubility of lighter PAHs. The complete absence of anthracene, along with the non-negligible presence of phenanthrene (as well as the higher concentration of pyrene compared to fluoranthene) in both sponges, points to a pyrolytic origin of the aromatic hydrocarbons and, therefore, to long-range transport. Total PAH (2- to 6-ring parent and alkylated) concentrations in the collected sediment samples were around 1–2 µg/kg dry weight.
Heavy metals can reach the Antarctic marine biosphere through four main processes, i.e., long-range atmospheric transport and deposition, weathering, biological transportation (i.e., seabird and penguin guano), and anthropogenic activities (Webb et al., 2020). High concentrations of heavy metals were previously determined in several Antarctic benthic invertebrates, including sponges (Palmer et al., 2022). According to Illuminati et al. (2016), the accumulation of heavy metals (namely cadmium, lead and copper) was significantly lower in the spicules (even if they represent about 80% of the sponge mass) than in the corresponding organic fraction of the Antarctic sponges Sphaerotylus antarcticus, Kirkpatrickia coulmani and Haliclona sp. Tedania charcoti contained zinc and cadmium in remarkable amounts (5,100 and 15,000 mg/kg of dry weight, respectively; Capon et al., 1993). An unidentified sponge from the Antarctic Peninsula contained cadmium, zinc and copper at concentrations of 3.7, 37, and 3.2 mg/kg of wet weight, respectively (de Moreno et al., 1997). Bargagli et al. (1996) reported cadmium up to 80 mg/kg in Rossella sp., Tedania sp. and Axociella sp. Such concentrations were generally lower than those determined in our Haliclona samples, suggesting an enrichment of certain trace metals in the Thetys Bay area over time. In our study, at least one Haliclona species generally contained amounts of certain heavy metals similar to those determined in sediment. This finding was in line with Negri et al. (2006), who observed comparable amounts of heavy metals in sediment and in Homaxinella balfourensis, Mycale acerata and Sphaerotylus antarcticus from McMurdo. Negri et al. (2006) found that only cadmium accumulated to higher concentrations in sponge tissue than in sediments. In our study, in addition to cadmium, arsenic (in both sponge species), nickel and zinc (in H. dancoi only), and mercury (in H. scotti only) also showed a similar trend.

The sponge-associated bacterial community composition strongly differed from those retrieved in both sediment and seawater, suggesting that sponge-specific bacterial communities could occur in the analyzed species. This finding was in line with previous observations (e.g., Moreno-Pino et al., 2020; Sacristán-Soriano et al., 2020). Furthermore, in accordance with the main data available for other Antarctic sponges (e.g., Rodríguez-Marconi et al., 2015; Cárdenas et al., 2018, 2019; Steinert et al., 2019), both sponge species hosted bacterial communities dominated at phylum level by Proteobacteria and Bacteroidota, followed by Firmicutes and Planctomycetota. For instance, a specimen of H. dancoi (namely THB8) collected in 2005 from the Thetys Bay hosted a bacterial community dominated by Alpha- and Gammaproteobacteria (29.6 and 40.1% of the total sequences, respectively), whereas Actinobacteriota (similarly to the present study) and Bacteroidota accounted for 4.9 and 4.4%, respectively (Papale et al., 2020). Conversely, no data are available on the prokaryotic community specifically associated with H. scotti (it was collected from the Thetys Bay more than 114 years after its original description); our data thus represent a baseline.

Consistently with Sacristán-Soriano et al. (2020), archaeal sequences (rarely targeted within the Antarctic sponge-associated prokaryotic communities) were almost exclusively related to Thaumarchaeota in both sponge species. The exception was the specimen H.
dancoi B1, which hosted only Euryarchaeota. Among Thaumarchaeota, the abundance of Candidatus Nitrosopumilus confirmed previous observations by Moreno-Pino et al. (2020) on the Antarctic sponges Myxilla sp. and Leucetta antarctica, highlighting its probable role in ammonia oxidation within the host tissues. Finally, the archaeal communities in the targeted sponges were in sharp contrast with those retrieved in sediment and seawater samples, which were characterized by the overall predominance of Nanoarchaeota and Euryarchaeota and the absence of Thaumarchaeota.

Notably, despite their high similarities at phylum level, the bacterial communities of H. scotti and H. dancoi were differently structured at both order and genus level, suggesting the possible selection of associated microbes by the host species, in combination or not with environmental factors, as previously observed (e.g., Cárdenas et al., 2019; Sacristán-Soriano et al., 2020). For instance, among the main orders, Bacteroidales, Lactobacillales, Enterobacterales, Burkholderiales, and Bacillales were exclusively, or almost exclusively, associated with H. dancoi. The same was true for the UBA10353 marine group, previously reported in association with sponges (i.e., Georgieva et al., 2020; Laroche et al., 2021), which was particularly abundant only in H. scotti. Finally, Flavobacteriales and Pseudomonadales were significantly more abundant in H. scotti than in H. dancoi, whereas Propionibacteriales showed the opposite trend. Genera showed a patchy distribution among the analyzed specimens. However, some of them were absent in one or the other species. For instance, Profundimonas and Bdellovibrio were hosted only by H. scotti (both in 2018 and 2019). Conversely, Bifidobacterium, Lactococcus, Prevotella and Porphyromonas occurred only in association with H. dancoi. Further insight into the diversity of sponge-associated prokaryotes, analyzing a higher number of sponge specimens sampled across time and space, is needed to elucidate the interactions between microbes and their Antarctic benthic hosts, by establishing whether the observed exclusive phylotypes are actual members of the core sponge microbiomes of H. dancoi and H. scotti, and to identify the intrinsic sponge features driving the observed specific associations.

Microbes associated with sponges are thought to thrive in the presence of organic and inorganic pollutants in their host tissues, possibly protecting sponges by transforming pollutants or participating in their excretion, as suggested by Perez et al. (2003) for some PCB congeners in the Mediterranean Spongia officinalis. The potential of sponge-associated bacterial communities to degrade aromatic compounds was recently suggested by the application of predictive functional analyses on 16S rRNA gene data (Steinert et al., 2019; Moreno-Pino et al., 2020; Papale et al., 2020; Cristi et al., 2022). Mangano et al. (2014) observed that bacterial isolates from the Antarctic sponge Hemigellius pilosus showed resistance to cadmium, suggesting that this probably allowed them to gain residence in the host tissue. Later, the ability to tolerate high heavy metal concentrations (i.e., mercury and cadmium) was demonstrated in exopolysaccharide-producing bacterial isolates from the Antarctic sponges Haliclonissa verrucosa, H. pilosus and T. charcoti (Caruso et al., 2018). It cannot be excluded that the different levels of bioaccumulation of the tested contaminants observed in H. scotti and H.
dancoi from the same sampling site might be driven by the bacterial communities associated with the sponge host, depending on contaminant toxicity, as well as on bacterial resistance and/or transformation capabilities. In our study, the statistical analyses comparing microbiological and chemical data in sponge tissues allowed us to hypothesize that pollutant levels in Antarctic sponges could be a non-biological feature involved in the establishment of the associated bacterial communities. For instance, at phylum level, the low abundance of Acidobacteriota and Actinobacteriota might depend on the concentration of certain pollutants (e.g., phenanthrene, low-chlorinated PCBs and some trace metals, such as molybdenum, antimony and thallium) in the sponge tissues. Conversely, Nitrospinota was the sole phylum showing a significant positive correlation with a PCB (namely PCB170). To date, the study of the effects of pollution on microbes in their natural environment has been very limited and mainly addressed to soils (especially agricultural fields) and sediments. A numerical reduction of Acidobacteria in the presence of phenanthrene was previously reported in soil microbial communities (Sipila et al., 2008; Ding et al., 2012). Furthermore, Festa et al. (2018) observed that Actinobacteria and Acidobacteria were significantly repressed in phenanthrene-amended microcosms. Acidobacteria, together with Proteobacteria and Firmicutes, are generally reported as bacterial phyla associated with PCB-contaminated sediments (Zenteno-Rojas et al., 2020). Among Proteobacteria, the ammonia-oxidizing Nitrosococcales (Gammaproteobacteria) positively correlated with both PCBs and PAHs, whereas the congener PCB105 (i.e., 2,3,3′,4,4′-pentachlorobiphenyl) correlated with the bacterial orders Defluviicoccales (within Alphaproteobacteria) and Coxiellales (within Gammaproteobacteria). Among Actinobacteria, PCB105 also positively correlated with the order Micromonosporales.

In addition, the PCA showed a strong relationship between PCB105 and the genus Lawsonella in both sponges. Further investigations should address the isolation of members of these genera to be tested for the degradation of selected PCB congeners. Unlike our results, exposure to PCB congeners and Aroclor 1242 has been reported to select for bacterial groups belonging to potential PCB degraders, i.e., Betaproteobacteria and Acidobacteria, with a decrease of toxicity with increased chlorine substitution (Correa et al., 2010; Nuzzo et al., 2017). Differently from PCBs and PAHs, trace metals seemed to favor the occurrence of Cyanobacteria and Verrucomicrobiota, suggesting their tolerance to metals occurring in the sponge mesohyl tissues. Overall, the results of our study did not allow us to discern specific patterns linking the exposure of sponge-associated bacterial communities to pollutants and the different bacterial community structures. Microcosm experiments with Antarctic sponges exposed to pollutants (individual or combined) could be performed to disentangle bacterial community dynamics over time.

Concluding remarks

This study provides important information on the bioaccumulation of a selection of persistent organic pollutants (i.e., PCBs and PAHs) and trace metals, along with the composition of the associated prokaryotic communities, in the Antarctic sponge species H. scotti and H. dancoi. In particular, we report for the first time on the microbiological and chemical features of H.
scotti, representing a rare species in the Ross Sea. The accumulation of the targeted inorganic and organic contaminants by the two sponge species appeared evident, as demonstrated by their lower concentrations in the abiotic matrices (i.e., sediment and seawater) surrounding the sponge individuals at sampling time. Overall, in comparison with previous investigations, we observed an increased concentration of trace metals and, conversely, a decrease in the level of PCBs in the sponge tissue. Moreover, the analysis of PAHs in Antarctic sponges is reported for the first time in this study. From a microbiological point of view, our findings confirmed previous observations on the predominance of Proteobacteria and Bacteroidota, as well as the low abundance of Archaea, within the prokaryotic communities associated with Antarctic sponges, with some bacterial traits (mainly at order and genus levels) being sponge-species specific.

The results obtained in this study represent a baseline for further investigations aimed at disentangling the interactions between prokaryotes and Porifera in the Antarctic environment. The outcomes suggest that future research should take into consideration anthropogenic stress factors in addition to biological features. This is the case for pollution levels, which directly or indirectly could increase in polar areas following ice-melting, with the consequent release of contaminants entrapped for a long time within glaciers, affecting the biota. Further studies carried out under controlled conditions and targeting selected pollutants and bacterial taxa are certainly needed to elucidate both the pollutant bioaccumulation rate of Antarctic sponges and the actual effect of contamination in structuring the sponge-associated prokaryotic communities. Finally, testing bacterial isolates for POP degradation capability and efficiency, in addition to metal tolerance, could furnish further information on the adaptation of bacteria to the sponge environment and their role in the protection of their host.

FIGURE 3 Concentrations of PCB congeners determined in seawater (expressed in pg/L), sediment and sponge samples (both expressed in pg/g) from the Thetys Bay (Terra Nova Bay, Antarctica) during austral summer 2018 (A) and 2019 (B). Only values above pg/g or pg/L are shown. Please note the different scale.

FIGURE 4 PAH concentrations retrieved in seawater (expressed in pg/L), sediment and sponge samples (both expressed in pg/g) from the Thetys Bay (Terra Nova Bay, Antarctica) during austral summer 2018 (A) and 2019 (B). Only values above pg/g or pg/L are shown. Please note the different scale.

FIGURE 5 Trace metal concentrations (expressed in ppm) in sediment and sponge samples collected at Thetys Bay (Terra Nova Bay, Antarctica) during austral summer 2018 (A) and 2019 (B). Hg concentrations (ppb) in all samples are shown in (C). Please note the different scales and units.

FIGURE 6 Bacterial community composition at phylum level in sponge specimens collected from the Thetys Bay.

FIGURE 7 Taxonomic composition at order level of the bacterial communities associated with (A) Haliclona dancoi and (B) Haliclona scotti specimens collected from the Thetys Bay (Antarctica).

FIGURE 8 Heatmap showing the taxonomic composition of sponge-associated bacterial communities at genus level.

FIGURE 9 Taxonomic composition of the bacterial communities in sediment and seawater samples collected from the Thetys Bay (Antarctica).
FIGURE 10 Principal component analysis computed on bacterial community composition in sediment and seawater samples collected from the Thetys Bay (Antarctica).

FIGURE 11 Correlation matrix between the relative abundance of each bacterial phylum and chemical pollutants: (A) PCBs; (B) PAHs; (C) trace metals.
CDK4/6 inhibitors sensitize gammaherpesvirus-infected tumor cells to T-cell killing by enhancing expression of immune surface molecules

The two oncogenic human gammaherpesviruses, Kaposi sarcoma-associated herpesvirus (KSHV) and Epstein–Barr virus (EBV), both downregulate immune surface molecules, such as MHC-I, ICAM-1, and B7-2, enabling them to evade T-cell and natural killer cell immunity. Both also either encode human cyclin homologues or promote cellular cyclin activity, and this has been shown to be important for the proliferation and survival of gammaherpesvirus-induced tumors. CDK4/6 inhibitors, which are approved for certain breast cancers, have been shown to enhance expression of MHC-I in cell lines and murine models of breast cancer, and this was attributed to activation of interferons by endogenous retrovirus elements. However, it was not known whether this would occur in gammaherpesvirus-induced tumors, in which interferons are already activated. Multiple KSHV/EBV-infected cell lines were treated with CDK4/6 inhibitors. The growth of viable cells and the expression of surface markers were assessed. T-cell activation stimulated by the treated cells was assayed by a T-cell activation bioassay. Both viral and host gene expression were surveyed using RT-qPCR. Three CDK4/6 inhibitors, abemaciclib, palbociclib, and ribociclib, inhibited cell growth in KSHV-induced primary effusion lymphoma (PEL) and EBV-positive Burkitt's lymphoma (BL) cell lines, and in KSHV-infected human umbilical vein endothelial cells (HUVECs). Moreover, CDK4/6 inhibitors increased mRNA and surface expression of MHC-I in all three and prevented downregulation of MHC-I surface expression during lytic replication in KSHV-infected cells. CDK4/6 inhibitors also variably increased mRNA and surface expression of ICAM-1 and B7-2 in the tested lines. Abemaciclib also significantly enhanced T-cell activation induced by treated PEL and BL cells. Certain gammaherpesvirus genes as well as endogenous retrovirus (ERV) 3-1 genes were enhanced by CDK4/6 inhibitors in most PEL and BL lines, and this enhancement was associated with expression of gamma interferon-induced genes including MHC-I. These observations provide evidence that CDK4/6 inhibitors can induce expression of the surface immune markers MHC-I, B7-2, and ICAM-1 in gammaherpesvirus-infected cell lines and induce virus-specific immunity. They can thus thwart virus-induced immune evasion. These effects, along with their direct effects on KSHV- or EBV-induced tumors, provide a rationale for the clinical testing of these drugs in these tumors.

Both viruses utilize multiple strategies to evade the human immune system. One strategy is to inhibit the expression of immune surface molecules, such as major histocompatibility complex class I (MHC-I), intercellular adhesion molecule 1 (ICAM-1), and B7-2, also called CD86. Two KSHV genes, K3 and K5, serve as E3 ubiquitin ligases and promote degradation of various cell immune surface molecules including MHC-I, ICAM-1, and B7-2 [4-7]. Also, KSHV-encoded latency-associated nuclear antigen (LANA) can inhibit MHC-I expression [8]. Multiple EBV genes, including BNLF2a, BGLF5, BILF1, and BCRF1, downregulate MHC-I expression by directly interfering with the HLA-I antigen presentation pathway [9-12]. EBV-encoded viral interleukin-10 (vIL-10) inhibits not only MHC-I, but also ICAM-1 and B7 expression on monocytes [13]. Downregulation of MHC-I by these KSHV- and EBV-encoded genes impairs antigen presentation to CD8+ T cells.
Moreover, diminished ICAM-1 and B7-2 expression enables evasion of both T-cell and natural killer (NK) cell immunity. This downregulation occurs in tumors caused by KSHV or EBV and makes the tumors relatively invisible to the immune system. These findings suggest that restoration of these surface immune markers could potentially enhance immune recognition and elimination of virus-infected tumor cells.

Cell cycle dysregulation is a hallmark of gammaherpesvirus-mediated oncogenesis, and cyclins are important for the survival of gammaherpesvirus-induced tumors [14]. Cyclin D2 has been shown to be crucial to the survival of KSHV-infected PELs [15]. KSHV LANA cooperates with the vCyclin-CDK6 kinase complex to facilitate latency and enhance cell proliferation [16,17]. Cyclin D1 overexpression is required for EBV to stably infect nasopharyngeal epithelial cells [18]. EBV latent membrane protein-1 (LMP-1)-induced expression of cyclin D2 contributes to B-cell transformation and uncontrolled cell proliferation [19]. All these observations underscore the role of cyclin D-CDK4/6 as a crucial regulator of gammaherpesvirus-mediated oncogenesis and further suggest that cyclin D-CDK4/6 could potentially serve as an immunomodulatory target for KSHV- or EBV-related tumors.

CDK4/6 inhibitors were developed to block cyclin D-CDK4/6 complex formation, preventing phosphorylation of Rb1 and resulting in cell cycle arrest at the G1-to-S transition [20]. Three inhibitors are now approved by the US Food and Drug Administration (FDA) for the treatment of breast cancer: abemaciclib (Abe), palbociclib (Pal), and ribociclib (Rib) [21]. A recent study showed that PEL cell lines require cyclin D2 expression and are highly sensitive to treatment with Pal in vitro [15]. Several studies have revealed that, in addition to blocking the tumor cell cycle, CDK4/6 inhibitors can upregulate MHC-I expression in colon cancer cells [22-24]. Evidence was presented that this effect was mediated through enhanced expression of ERV3-1, leading to a type III interferon response that includes MHC-I upregulation. If CDK4/6 inhibitors could upregulate surface immune markers, this could potentially facilitate their activity against KSHV- or EBV-induced tumors. However, we were concerned that such an effect might not be seen, since these tumors are chronically infected with gammaherpesviruses.

To explore the potential utility of CDK4/6 inhibitors in gammaherpesvirus-induced tumors, we investigated the effect of the 3 approved CDK4/6 inhibitors, Abe, Pal, and Rib, on the proliferation of PEL cells, of KSHV-infected human umbilical vein endothelial cells (HUVEC), and of EBV-infected BL cells. In addition, we investigated their potential to reverse the downregulation of MHC-I, ICAM-1, and B7-2 in these tumor cells and the potential role of drug-induced changes in gammaherpesvirus and ERV3-1 expression in these effects.

Cells and cell culture

BJAB cells, the PEL cell lines JSC-1, BCBL-1, BC-1, and BC-2, and the EBV-positive cell lines Akata, Raji, and Daudi were obtained and maintained as described previously [25,26]. JSC-1, BC-1, and BC-2 are co-infected with EBV, while BCBL-1 is not.
The KSHV-infected iSLK cell line (BAC16 strain) was kindly provided by Rolf Renne from the University of Florida and maintained in DMEM (Gibco) supplemented with 10% fetal bovine serum (HyClone), 1.0 μg/mL puromycin, 1.2 mg/mL hygromycin, and 250 μg/mL G418. HUVECs were purchased from Lonza and maintained in EGM-2 BulletKit medium (Lonza) for up to 5 passages, with passages 3 to 5 used for experiments. All cells were maintained at 37 °C and 5% CO2 in Falcon cell culture flasks. For the treatment of cells with inhibitors, floating cells were seeded at 2 × 10^5 cells/mL and treated with CDK4/6 inhibitors at the indicated concentrations for up to 3 days, while adherent cells were treated for up to 7 days.

Test compounds

Abemaciclib (LY2835219) and palbociclib (PD-0332991) were purchased from Selleck Chemicals and dissolved in ethanol and water, respectively, at a stock concentration of 10 mM. Ribociclib (LEE011, 10 mM in DMSO) was purchased from MedChemExpress. All stocks were aliquoted and stored at −20 °C.

Assay for viable cells

The relative number of viable cells was analyzed using the CellTiter-Glo Luminescent Cell Viability Assay kit (Promega). Briefly, 50 μL of reagent was added to 50 μL of cells in a 96-well plate. Contents were mixed for 2 min on an orbital shaker and then incubated at room temperature for 10 min. Luminescence was recorded using a VICTOR X3 plate reader (PerkinElmer). The percentage of live versus dead cells was assessed using trypan blue staining.

Flow cytometry analysis and antibodies

Analysis of cells for surface marker expression was carried out as described previously [25].

T-cell activation assay

T-cell activation assays were performed using the T-cell activation bioassay IL-2 promoter kit (Promega, cat# J1651) as described previously [27]. Briefly, PEL cells were treated with the indicated concentrations of CDK4/6 inhibitors for 3 days, after which the cells were washed with PBS and mixed with T cell receptor (TCR)/CD3 effector cells (Jurkat T-cells expressing a luciferase reporter gene under the IL-2 promoter) at a 2:1 ratio for BCBL-1 to Jurkat and a 1:2 ratio for Akata to Jurkat, and stimulated with various concentrations of anti-human CD3 monoclonal antibody (OKT3) from ThermoFisher Scientific (cat# 16-0037-81) in a 37 °C incubator for 6 h. The mixing and incubation were done in triplicate in 96-well plates containing 25 μL of target cells, 25 μL of Jurkat cells, and 25 μL of anti-CD3 antibody per well. Bio-Glo reagent was then added for a 10 min incubation at room temperature. The signal was captured using a Victor X3 multilabel plate reader (PerkinElmer). Wells containing cells but no Bio-Glo reagent served as background luminescence controls. Luminescence data were plotted as a 4PL regression graph using GraphPad Prism software. Fold change in activation was calculated after subtracting the signal obtained from Jurkat cells without co-stimulation by target cells from that obtained with co-stimulation by target cells.

RT-qPCR

mRNA was extracted using the RNeasy kit (Qiagen). cDNA synthesis was performed using random primers with the High-Capacity cDNA Reverse Transcription Kit (ThermoFisher Scientific) on a T100 Thermal Cycler. For target genes, including the EBV-encoded EBER2 and BMRF1, SYBR Green qPCR assays were performed using the Applied Biosystems SYBR Green PCR Master Mix (ThermoFisher Scientific) on an ABI StepOnePlus real-time PCR system (ThermoFisher Scientific). Primers used are listed in Additional file 8. Relative mRNA expression levels were analyzed using the ΔΔCt method, with the gene coding for β-actin as the reference gene.
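To make the ΔΔCt normalization above concrete, here is a minimal sketch of the standard 2^(−ΔΔCt) fold-change calculation; the Ct values in the example are hypothetical illustrations, not data from this study.

```python
# Minimal sketch of the 2^(-ddCt) relative-expression calculation.
# Ct values below are invented for illustration only.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene vs. a reference gene (e.g. beta-actin)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare treated vs. control
    return 2 ** (-dd_ct)                                # assumes ~100% PCR efficiency

# Example: target amplifies ~1 cycle earlier after treatment -> ~2-fold increase
print(round(fold_change(24.0, 18.0, 25.0, 18.0), 2))  # 2.0
```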
Human IL-29/IL-28B DuoSet ELISA

Cell culture media were collected into 1.5 mL Eppendorf tubes and clarified by centrifugation at 15,000×g for 10 min. Supernatants were collected and analyzed by ELISA for IL-28B and IL-29 production using the Human IL-29/IL-28B DuoSet ELISA kit (R&D, cat# DY1598B-05) following the manufacturer's instructions. Briefly, 100 μL/well of sample or standards was added and incubated for 2 h. After 3 washes, 100 μL/well of detection antibody was added and incubated for 2 h. After 3 washes, 100 μL/well of Streptavidin-HRP was added to each well and incubated protected from light for 20 min at room temperature. Following another 3 washes, 100 μL/well of Substrate Solution was added and incubated for 20 min at room temperature without direct light. Finally, 50 μL/well of Stop Solution was added, and optical density was measured using a microplate reader set to 450 nm.

CDK4/6 inhibitors inhibit growth of KSHV- and EBV-infected cells

We first investigated the impact of CDK4/6 inhibitors on the cell growth of various KSHV-infected cells, including the PEL cell lines JSC-1 (Fig. 1a), BCBL-1 (Fig. 1b), BC-1 (Additional file 1a), and BC-2 (Additional file 1b); EBV-infected BL lines including Akata (Fig. 1c), Raji (Fig. 1d), and Daudi (Additional file 1c); and two EBV-negative BL lines, BJAB (Additional file 1d) and CA46 (Additional file 1e). All three CDK4/6 inhibitors exhibited dose-dependent inhibitory effects on cell growth for all the tested cell lines at day 3 (72 h), with about 30 to 70% inhibition observed at the highest doses tested (1 µM of Abe and Pal, or 5 µM Rib). We also tested the effects of Abe on the growth of uninfected HUVEC and KSHV-infected HUVEC cells. To produce KSHV-infected HUVEC, cells were exposed to concentrated viral stocks of KSHV.BAC16 at 15 viral DNA copies per cell to reach around an 85% GFP-positive rate by 24 h (Additional file 2). GFP is constitutively expressed in KSHV.BAC16-infected cells, and GFP expression can be used to identify infected cells. Abe inhibited cell growth of KSHV-infected HUVEC starting at day 4 at concentrations of 0.02 µM or higher; by day 7, cell growth was inhibited by about 50% at the highest dose tested (0.5 µM). In addition, Abe similarly inhibited the growth of uninfected HUVEC. The decrease in viable cell numbers after culture with CDK4/6 inhibitors was caused by a decrease in proliferation rather than an increase in cell death. After 3 days (B cells) or 4 days (uninfected or infected HUVEC cells) of culture with Abe at doses that reduced cell numbers, there was no decrease in the percentage of viable cells as assessed by trypan blue exclusion (Additional file 3).

CDK4/6 inhibitors increase MHC-I surface expression in KSHV- and EBV-infected cells

We next explored the effect of CDK4/6 inhibitors on the surface expression of MHC-I on KSHV- and EBV-infected cells. Several KSHV-encoded lytic genes, including K3 and K5, can ubiquitinate MHC-I and reduce its surface expression. Consistent with this, MHC-I expression decreased to 37.5% of control for JSC-1 (Fig. 2a and g) and 31% for BCBL-1 (Fig. 2b and g) after lytic induction by butyrate. However, pretreatment of these cells with 1 μM Abe substantially prevented downregulation of MHC-I by butyrate (Fig. 2a, b, and g). Even without lytic induction, treatment of the JSC-1 and BCBL-1 PEL lines with 1 µM Abe increased surface MHC-I expression (Fig. 2a, b).
Treatment of two other PEL cell lines, BC-1 and BC-2, with 1 µM Abe exhibited similar upregulation of MHC-I on the cell surface (Additional file 4, a-d). Abe also increased surface expression of MHC-I in the EBV-infected Akata and Raji cell lines (Fig. 2c, d). Figure 2g shows the mean and standard deviation from 3 experiments of the expression of MHC-I on the JSC-1 and BCBL-1 PEL lines and on the Akata and Raji EBV-infected BL lines. We further tested the effects of Pal and Rib, in addition to Abe, on MHC-I expression on the JSC-1 and BCBL-1 PEL lines and on the Akata and Raji EBV-infected BL lines. As can be seen in Fig. 2g, all three CDK4/6 inhibitors increased MHC-I expression on PEL lines induced to lytic replication as well as on uninduced lines. Moreover, all three drugs enhanced MHC-I expression on the Akata and Raji BL lines. EBV-uninfected BJAB, a Burkitt's lymphoma B cell line, also exhibited increased MHC-I when treated with 1 µM Abe, although the increase was less than that of the virus-infected lines (Additional file 4e, f). We also tested Abe's effect on MHC-I expression in KSHV-infected and uninfected HUVEC (Fig. 2e, f, and h). Cells were infected with KSHV.BAC16 to obtain about 85% of cells expressing GFP. As seen in Fig. 2h, while both 0.1 and 0.5 µM Abe induced significant and dose-dependent increases in MHC-I on KSHV-infected HUVEC, only 0.5 µM Abe induced a small increase of MHC-I in uninfected HUVEC.

CDK4/6 inhibitors increase ICAM-1, B7-2 and PD-L1 surface expression in KSHV- and EBV-infected cells

KSHV and EBV also downregulate surface expression of ICAM-1 and B7-2 [6,13], which are important for T-cell and NK-cell activation and effector function. We assessed the effects of CDK4/6 inhibitors on surface expression of ICAM-1 and B7-2, as well as PD-L1, on virus-uninfected BJAB cells and the same set of virus-infected PEL and BL cell lines (Fig. 3; Additional file 5). BJAB, PEL cells, and EBV-infected BL cells were treated with 1 μM Abe, 1 μM Pal, or 5 μM Rib for 3 days, while uninfected and KSHV-infected HUVEC were treated with 0.5 μM Abe, 0.5 μM Pal, or 2.5 μM Rib for 4 days. Cells were then analyzed using flow cytometry. All the tested cell lines exhibited significant increases in ICAM-1 and B7-2 surface expression in response to all 3 CDK4/6 inhibitors, although the virus-infected lines showed bigger increases compared to the virus-negative BJAB line. The average fold increase for ICAM-1 in KSHV- and EBV-infected cells ranged from 2.5- to 4.5-fold (Fig. 3a), and for B7-2 from 3.2- to 6.0-fold (Fig. 3b). In addition, all 3 CDK4/6 inhibitors increased expression of PD-L1 from 4.2- to 8.3-fold (Fig. 3c) in the virus-infected lines. Virus-uninfected BJAB cells exposed to the drugs had a substantially smaller increase in the surface markers. We also tested the effects of the drugs in uninfected and infected HUVEC cells. While there was a relatively small increase (1.5-fold for ICAM-1, 2.0-fold for B7-2 and PD-L1) in uninfected HUVEC, there was a more substantial increase (2.7- to 4.2-fold for ICAM-1, 25.4- to 5.8-fold for B7-2, and 5.2- to 7.2-fold for PD-L1) in expression of all 3 markers in KSHV-infected HUVEC (Fig. 3d-f, Additional file 6).

CDK4/6 inhibitors increase mRNA expression of MHC-I, ICAM-1, B7-2, and PD-L1 in KSHV- and EBV-infected cells

We further evaluated the impact of CDK4/6 inhibitors on the expression of mRNA for these surface markers in JSC-1 (Fig. 4a), BCBL-1 (Fig. 4b), Akata (Fig. 4c), and Raji (Fig. 4d) cells. Cells were cultivated with 1 μM Abe or solvent control (diluted ethanol in medium) for 24 h or 48 h, and the total RNA was then extracted for qPCR analysis.
As the results show, mRNA expression of all these 4 genes was significantly increased at both time points after treatment, with an average fold increase of 1.6-1.8 for MHC-I and 1.4-1.5 for ICAM-1.

CDK4/6 inhibitor treatment of PEL and EBV-infected BL cells enhances T-cell activation by these lines

We next sought to assess whether these CDK4/6 inhibitor-treated cells would enhance T-cell activation through increased expression of co-stimulatory molecules. As in previous experiments, 3 days of treatment of BCBL-1 PEL cells and Akata BL cells with Abe upregulated surface expression of ICAM-1 and B7-2 (Fig. 5a, b). T-cell activation induced by Abe was assessed in aliquots of the same cell culture using Jurkat cells expressing a luciferase reporter gene under the control of the IL-2 promoter as the effector cells, and anti-CD3 antibody was used to activate these cells. Both BCBL-1 and Akata cells cultured in the absence of CDK4/6 inhibitors increased Jurkat T-cell activation above the baseline (Fig. 5c, e). However, Abe-treated BCBL-1 and Akata cells further stimulated Jurkat T-cell activation compared to the control untreated cells in a dose-dependent manner (Fig. 5c, e). When co-stimulated with 0.6 μg/mL anti-CD3 antibody (red arrow in Fig. 5c, d), the mean increases in T-cell activation induced by 0.3 μM Abe-treated BCBL-1 cells and 1 μM Abe-treated Akata cells were 2.6-fold and 5-fold over control-treated cells, respectively.

CDK4/6 inhibitors induce increased expression of KSHV and EBV genes, as well as ERV-3

Abe has been shown to suppress expression of DNA methyltransferases, which in turn leads to increased expression of the endogenous retrovirus ERV3-1, and it has been suggested that this may be a mechanism for the increased expression of MHC-I [23,28]. In particular, it has been suggested that ERV3-1 stimulates interferon type III, which in turn leads to increased expression of interferon-induced genes including MHC-I. We were interested to see if such a mechanism might apply in the cell lines studied here, which were infected with an exogenous virus and might thus have high basal levels of interferon expression that would not be increased by ERV3-1. We were also interested to explore the possibility that Abe-induced expression of KSHV and/or EBV genes might contribute to the effect [29][30][31]. DNMT1, which suppresses expression of endogenous retroviruses and has been reported to suppress certain gammaherpesvirus genes [32], was substantially downregulated in both JSC-1 and BCBL-1 PEL cells and in Raji EBV-infected BL cells after treatment with 1 µM Abe; there was also a trend towards a decrease in Akata cells, although the change was not significant (Fig. 6c).

Fig. 6 Effect of CDK4/6 inhibitors on viral and interferon-related cellular genes. KSHV genes (a), EBV genes (b), and cellular genes related to the interferon pathway (IFN-alpha, IFN-beta, IFN-gamma, ERV3-1, DNMT1, DDX58, IFNL2, and selected interferon-stimulated genes) (c) in cells treated either with ethanol control (0 μM Abe) or 1 μM Abe for 24 h. Total RNA was extracted from whole cell lysates and expression of specific genes was assayed by RT-qPCR. mRNA levels were normalized to β-actin and compared to those in control cells. Shown are average fold changes of mRNA levels in Abe-treated cells relative to ethanol-treated control cells from 3 independent experiments. Error bars represent standard deviations from 3 independent experiments. Statistically significant differences (*p ≤ 0.05, **p ≤ 0.01; ns, not significant; paired 2-tailed t-test) between control and Abe-treated cells are indicated.
Expression of the endogenous retroviral gene ERV3-1, which is suppressed by DNMT1, was significantly upregulated in all the tested lines except Raji (Fig. 6c). In addition, the dsRNA sensor DDX58, which senses endogenous retroviruses and stimulates IFN signaling, was also upregulated in all four cell lines (Fig. 6c). We next looked specifically at the expression of interferons. We found that IFNL2, IFN-α, and IFN-β, but not IFN-γ, mRNA expression was increased in three of the lines tested (JSC-1, BCBL-1, and Akata), but not in Raji cells (Fig. 6c). Moreover, using an ELISA kit that measures IL-28B plus IL-29, two other members of the interferon family, we found an increase in the supernatants of JSC-1 and BCBL-1 cells after treatment with 1 μM Abe for 3 days (Additional file 7). To evaluate the downstream effects of enhanced interferon activity, we measured the IFN-sensitive transcription factors STAT1 and NLRC5, as well as two IFN-stimulated genes, IFIT1 and OAS2, in these cells. All these 4 genes exhibited significantly enhanced expression in JSC-1, BCBL-1, and Akata cells after exposure to Abe (Fig. 6c). However, Raji was again somewhat of an outlier in that IFIT1 was not significantly changed and NLRC5 was decreased (Fig. 6c).

Discussion

Several studies have shown that, in addition to a direct effect on cancer cells, CDK4/6 inhibitors can enhance expression of surface MHC-I on certain tumors, thus making the cells more visible to the immune system [9][10][11][12][22][23][24]. In this report, we extend these findings by showing that pharmacological inhibition of CDK4/6 can enhance expression of MHC-I in cells infected by two oncogenic herpesviruses, KSHV and/or EBV. Moreover, we show that these inhibitors also upregulate expression of ICAM-1 and B7-2 in the infected tumor cells, which can enable NK killing and enhance sensitization of T-cells to the tumor cells. We further found that the increased surface expression induced by Abe was at least in part due to increased mRNA expression of these genes. Finally, we demonstrate that Abe enhances T cell activation induced by PEL cell lines. A schematic figure outlining the proposed mechanism for the upregulation of surface immune markers by CDK4/6 inhibitors in EBV- and/or KSHV-infected cells, and the subsequent activation of T cells and NK cells, is presented in Fig. 7. It has been shown that the KSHV vCyclin/CDK6 complex is constitutively activated in KSHV-infected cells [33]. Also, PEL cells are highly dependent on cyclin D2, which is required for the cell cycle G1/S transition through complex formation with CDK4 or CDK6 [15]. With regard to EBV, the virally encoded protein LMP-1 induces the expression of cyclin D2 to promote uncontrolled cell proliferation in EBV-positive Burkitt's lymphoma cell lines [19]. These data suggest that CDK4/6 inhibitors may suppress growth of KSHV- and/or EBV-induced tumors, and Manzano et al. have shown that the CDK4/6 inhibitor Pal can lead to a striking G1 arrest in BCBL-1 and BC-3 PEL cells [15]. Our results extend this observation and show that Pal and two other CDK4/6 inhibitors can also suppress growth of EBV-infected tumor cells as well as KSHV-infected HUVEC cells.
Interestingly, there was no significant difference in the degree of growth inhibition observed between EBV-infected Burkitt's cells and the control EBV-uninfected Burkitt B cells (Fig. 1 and Fig. S1), suggesting that this growth inhibitory effect of CDK4/6 inhibitors did not require viral components. Virus-induced tumors are potentially quite susceptible to immunologic control, since they express virally encoded foreign proteins. However, oncogenic viruses have evolved potent mechanisms to suppress expression of surface immune markers, thus enabling infected cells and the virus-induced tumors to evade detection by the immune system [4][5][6][7][8][13]. Approaches to reverse this downregulation might thus be important means of controlling these tumors. Recently, several studies have indicated that, in addition to inhibiting cell proliferation, CDK4/6 inhibitors may upregulate genes encoding MHC-I and the antigen presentation pathway in breast or colon tumors [23], alter the tumor microenvironment by suppressing regulatory T cell proliferation [23,34,35], or enhance activation of tumor-infiltrating T cells [22,24]. These studies also provided evidence that the upregulation of MHC-I by CDK4/6 inhibitors was the result of degradation of DNMT1, leading to activation of endogenous retroviruses (ERVs) and then to activation of interferons and interferon-induced genes [23]. Since the chronic gammaherpesvirus infection of KSHV- and EBV-induced tumors may already provide stimulation of interferon, we wondered whether a similar upregulation of MHC-I would be seen with CDK4/6 inhibitors. In fact, we found a robust upregulation of MHC-I in all the KSHV- and EBV-associated tumors, as well as in KSHV-infected HUVEC cells. In addition, we found that these drugs substantially upregulated ICAM-1 and B7-2. The effect seen here of ICAM-1 and B7-2 upregulation by CDK4/6 inhibitors is noteworthy. ICAM-1 and B7-2 are important co-factors for both T cell and NK cell killing [25,[36][37][38]. MHC-I expression is important for T cell killing, while MHC-I downregulation will generally activate NK cell killing in the face of ICAM-1 and B7-2 expression. However, downregulation of all three surface proteins, as occurs in gammaherpesvirus-infected cells, may enable escape from both T cell and NK cell killing. By upregulating both ICAM-1 and B7-2, along with MHC-I, CDK4/6 inhibitors thus render the tumor cells susceptible both to T cell killing and to certain types of NK cell killing. This conclusion is bolstered by the observation of enhanced T cell activity seen here with CDK4/6 inhibitor treatment. It should be noted that our results also show that CDK4/6 inhibitors can enhance expression of PD-L1, which might suppress the immune response induced by expression of other immune surface markers. Antibodies against PD-1 or PD-L1 have recently been shown to reverse immunologic suppression mediated by PD-1/PD-L1 and have potent activity against certain tumors that express new epitopes [39]. The observed increase of PD-L1 induced by Abe in gammaherpesvirus-infected cells suggests that CDK4/6 inhibitors may be most effective immunologically if administered with anti-PD-1 or anti-PD-L1 therapy. In this regard, recent mouse tumor studies have shown that such therapy can augment CDK4/6 inhibitor-induced tumor control [22][23][24].
Previous studies of the effect of CDK4/6 inhibitors on MHC-I have provided evidence that the upregulation may be the result of DNMT1 degradation, leading to activation of endogenous retroviruses and subsequent interferon and interferon-induced gene activation [23,40]. We wondered if a similar mechanism might apply in cells infected with gammaherpesviruses. We found that Abe does inhibit DNMT1 and that this inhibition was associated with activation of the endogenous retrovirus ERV3-1, the dsRNA sensor RIG-I, IFN-α, IFN-β, the type III interferon IFN-λ2 (IL-28A), IFN-sensitive transcription factors including STAT1 and NLRC5, and interferon-stimulated genes like IFIT1 and OAS2. NLRC5 is transcriptionally activated by STAT1, which can be induced by IFN-α [41]. Since NLRC5 transactivates MHC-I [42], the elevated expression of STAT1 and NLRC5 contributes to MHC-I overexpression. STAT1 also upregulates ICAM-1 and PD-L1 [43,44]. Although studies have shown that IFN type I upregulates B7-2 [45,46], whether the upregulation of B7-2 occurs through the same mechanism remains an unsolved puzzle. Interestingly, we also observed increased expression of both latent and lytic KSHV and EBV viral gene mRNA, although the elevation observed (which ranged from 1.3-fold to 3.4-fold) varied among cell lines and was relatively small. It is unclear at this time whether ERV activation, activation of gammaherpesvirus genes, or both may contribute to the upregulation of surface markers in KSHV- or EBV-infected cells, and additional studies will be needed to further clarify the mechanism of these effects.

Fig. 7 Schematic of the proposed mechanism for CDK4/6 inhibitors' effects on surface immune molecules in KSHV+ cells and EBV+ cells. In addition to direct inhibition of tumor cell proliferation, CDK4/6 inhibitors downregulate DNA methyltransferase 1, which activates both ERVs and certain KSHV/EBV genes. The DNA and RNA viral elements stimulate IFNs, which activate transcription factors including STAT1 and NLRC5. These genes in turn transactivate the expression of ISGs and immune surface molecules including MHC-I, ICAM-1, B7-2, and PD-L1. These surface molecules enable killing of the tumor by binding to receptors on T cells and potentially NK cells too. Because expression of PD-L1 is also enhanced, the results suggest that it may be worth testing CDK4/6 inhibitors with anti-PD-1/PD-L1 therapy.

Our laboratory has previously shown that the immunomodulatory drug pomalidomide (Pom) also upregulates immune surface molecules, including MHC-I, ICAM-1, and B7-2, in a range of KSHV-infected PEL and EBV-infected BL cells [25][26][27]. Pom has been shown to be clinically effective against KS and is in fact now approved for this indication [47], but it remains unclear if Pom upregulates these molecules in endothelial cells. Also, the combination of Pom plus pembrolizumab, an anti-PD-1 antibody, has been shown to be active in some patients with refractory EBV+ lymphoma. As seen here, CDK4/6 inhibitors upregulate these molecules not only in KSHV+ or EBV+ lymphoma cells, but also in KSHV-infected endothelial cells. This indicates that CDK4/6 inhibitors might be worth testing for possible activity against KS. To explore this possibility, our group has initiated a clinical trial to test Abe in patients with KS (NCT04941274).
Also, there is recent evidence that virus-induced tumors can be sensitive to anti-PD-1 or anti-PD-L1 therapy, probably because they express foreign (virally encoded) proteins and because viruses often upregulate PD-L1 [48,49]. Given that CDK4/6 inhibitors also upregulate PD-L1, it may be worth exploring the use of these drugs with anti-PD-1/PD-L1 therapy against virus-induced tumors in the future, although any benefits would have to be weighed against the potential for enhanced toxicity.

Conclusion

In summary, CDK4/6 inhibitors are shown here to inhibit proliferation of PEL cells, KSHV-infected endothelial cells, and EBV+ Burkitt's lymphoma cells, and also to reverse virus-induced suppression of MHC-I, ICAM-1, and B7-2 on these cells. Treated cells were sensitized to T-cell killing, probably due to the enhanced expression of these surface markers, including ICAM-1 and B7-2. Gammaherpesviruses have evolved a variety of mechanisms to downregulate expression of these surface markers, thus rendering KSHV- and EBV-infected tumors relatively invisible to the immune system. By reversing this effect, CDK4/6 inhibitors may promote the immunologic control of gammaherpesvirus-induced tumors in addition to their direct effects on tumor cell proliferation.
Influence of different post-interventional maintenance concepts on periodontal outcomes: an evaluation of three systematic reviews

Background: To selectively review the existing literature on post-interventional maintenance protocols in patients with periodontal disease receiving either non-surgical or surgical periodontal treatment.

Methods: Three systematic reviews with different periodontal interventions, i.e. scaling and root planing (SRP), SRP with adjunctive antibiotics, or regenerative periodontal surgery, were evaluated focusing on their post-interventional maintenance care. Due to the early publication of one review, an additional literature search update was undertaken. The search was executed for studies published from January 2001 to March 2015 through electronic databases to ensure the inclusion of recent studies on SRP. Two reviewers guided the study selection and assessed the validity of the three reviews found.

Results: Within the group of scaling and root planing alone, there were nine studies with more than three appointments for maintenance care and five studies with more than two appointments in the first 2 months after the intervention. Chlorhexidine was the most frequently used antiseptic agent, used for 2 weeks after non-surgical intervention. Scaling and root planing with adjunctive antibiotics showed a similar number of visits with professional biofilm debridement, whereas the regenerative literature comprised more studies with more than three visits in the intervention group. In addition, the use of antiseptics was longer, lasting 4 to 8 weeks after the regenerative intervention. The latter studies also showed more stringent maintenance protocols.

Conclusions: With increased interventional effort, there was a greater tendency to increase the frequency and duration of the maintenance care program and of antiseptic agents.

Background

Colonization by a pathogenic biofilm is recognized as the primary etiologic factor for the initiation and progression of periodontitis [1]. Despite the fact that host and environmental factors may significantly contribute to the resulting inflammatory process [2], it has been convincingly shown that professional supra- and subgingival biofilm control is able to control disease initiation and progression [3]. The effective control and management of the supra- and subgingival biofilm is traditionally performed by mechanical means, such as hand instruments and/or ultrasonic debridement [4]. Further methods include air-polishing devices with various inserts and powders, the latter being effective in removing biofilms while remaining low-abrasive to dental hard tissues [5,6]. Thorough non-surgical scaling and root planing (SRP) has been demonstrated to be an important part of successful periodontal treatment, especially in deeper periodontal pockets [7]. The results of such treatment may only be maintained in the long term when effective supragingival plaque control is performed and regular supportive periodontal treatment (SPT) is applied [1,8]. In addition, a body of evidence shows the benefit of systemically administered antibiotics as an adjunct to SRP, particularly in patients with aggressive periodontitis and in those with advanced chronic disease [9,10]. However, in distinct clinical situations with local defects, e.g. in teeth with furcation involvement or in single-rooted teeth with vertical bone defects, residual increased probing pocket depth (PPD) might persist after non-surgical therapy and require further treatment, e.g.
surgical interventions, in order to prevent ongoing loss of attachment and tooth loss [11,12]. Different studies have analyzed the effects of supervised maintenance care after periodontal therapy, e.g. subgingival scaling and root planing or surgical intervention. Such maintenance programs included the adjunctive use of antiseptic rinsing followed by professional supragingival cleanings [13,14]. These supervised maintenance care recommendations are mostly given after elaborate regenerative periodontal surgery. However, there is no comparative study or systematic review available that evaluates the influence of the different approaches on clinical outcomes. Therefore, the purpose of this study was to assess post-interventional maintenance protocols in terms of frequency and adjunctive antiseptic infection control for three different treatment modalities for infectious periodontal conditions: non-surgical periodontal therapy with and without systemic antibiotics, and regenerative surgical interventions. The following specific questions were addressed: 1. In a patient population with chronic periodontal disease or periodontal disease with infrabony defects, who underwent different periodontal interventions, which frequency of post-interventional maintenance was applied? 2. Is there a difference in pocket depth reduction among the same groups of periodontal therapy with different recall maintenance protocols?

Protocol

The present article merged and screened three existing systematic reviews that assessed three different treatment options: SRP [15], systemic antibiotics (amoxicillin and metronidazole) as an adjunct to SRP [16], and regenerative periodontal surgery [17]. All three reviews covered different periodontal therapeutic procedures. The intention of this article was not to compare these primary therapeutic concepts but to expose the measures that were taken after each of these therapies, to shed specific light on the post-interventional protocols, and to elaborate any potential differences between the different therapeutic approaches. The studies within the reviews showed no overlap in the articles chosen by the authors. Two [16,17] of the three reviews were fitted to match the current PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) criteria for reviews [18]. The older systematic review did not follow up-to-date protocol recommendations [15] due to its earlier publishing date. To ensure currency and to avoid missing current articles, a new literature search was undertaken as described below.

Eligibility criteria for additional search

The following parameters needed to be present for a publication to be eligible for inclusion: (1) The articles needed to be randomized controlled trials (RCT) or controlled clinical trials on periodontal treatment with a follow-up of at least 12 months, written in English. (2) Patients with chronic periodontitis aged at least 20 years. (3) A recorded maintenance care plan of at least 2 months post-intervention.

Outcome measures

The main focus of this study was to identify different maintenance strategies after any periodontal intervention, such as the frequency of appointments after SRP and periodontal surgery. In addition, changes in probing pocket depth (PPD) were extracted as the primary outcome parameter for meta-analysis.
Secondary outcome parameters such as recession (REC), clinical attachment level (CAL), or plaque index (PI) were not part of this meta-analysis due to the non-homogeneous data presentation in the individual studies. Since data on probing depth at the requested time points were missing for the non-surgical interventions, only guided tissue regeneration (GTR) studies that adequately reported on this parameter could be included in the forest plot.

Additional analysis and information sources

Due to the early publishing date of one review, the literature search was updated and the electronic databases MEDLINE and Cochrane (Oral Health Group Specialist Trials Register) were consulted again for studies published from January 2001 to March 2015, while the search strategy was re-formulated based on the three suggested complexes: "non-surgical therapy" AND "surgical therapy" AND types of studies. Two independent reviewers (ID and PRS) screened additional titles written in English for possible inclusion criteria that would match this study's review protocol. The following modified MeSH terms were used according to the original publication [15]: "periodontics" OR "periodontal disease"
- Intervention: "non-surgical therapy" OR "surgical therapy" OR "dental scaling" OR "root planing" OR "dental prophylaxis" OR "initial therapy" OR "debridement" OR "nonsurgical" OR "non-surgical" OR "periodo*" OR "gingivectomy" OR "periodontal pocket surgery" OR "surgical flaps" OR "modified Widman flap" OR "access" OR "Kirkland" OR "osseous surgery" OR "apically repositioned" OR "coronally"
- Study design: "longitudinal studies" OR "comparative study" OR "clinical trial"

Influence of maintenance on therapy

In order to assess the influence of different maintenance protocols, probing depth reduction served as the clinical outcome. The data on the mean and standard deviation of probing depth reduction were extracted from each of the included studies for meta-analysis. Because of differences in the observation period across studies, only those studies that had somewhat similar follow-up frequencies were pooled. Due to a large amount of heterogeneity between studies (I² > 50%), a random effects model was necessary for pooling. All analyses were performed with R [19]. The studies were arranged in the following categories: protocol 1 (two or fewer recall visits within the 2 months) and protocol 2 (three or more visits within the 2 months after the periodontal intervention). The duration of the use of antiseptic agents was categorized into A, B, and C: CHX/A denotes antiseptic rinsing for 2 weeks after the periodontal intervention, CHX/B rinsing for 4 weeks, and CHX/C rinsing for up to 8 weeks after the periodontal intervention. For example, protocol 2 with CHX/C represents the most vigorous maintenance protocol, while protocol 1 with CHX/A represents the least vigorous post-interventional maintenance care (Fig. 2).
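The review states only that a random-effects model was fitted in R; as a hedged illustration of that pooling step, the sketch below implements the DerSimonian-Laird estimator, one common random-effects method. The study means and variances in the example are invented placeholders, not data from the included studies.

```python
# Hedged sketch of DerSimonian-Laird random-effects pooling (illustrative only).
import math

def dersimonian_laird(effects, variances):
    w = [1.0 / v for v in variances]                            # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)    # fixed-effect mean
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                               # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0         # I^2 heterogeneity, %
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# PPD reductions in mm with their variances (three hypothetical studies)
pooled, ci, i2 = dersimonian_laird([2.8, 3.7, 3.1], [0.04, 0.09, 0.06])
print(round(pooled, 2), [round(x, 2) for x in ci], round(i2))
```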
Quality assessment

Studies within the three systematic reviews were methodologically screened by two reviewers in order to assess their quality and potential risk of bias [15][16][17].

Summary of measures

Throughout the three systematic reviews, there was a variety of different maintenance protocols. All studies that utilized an antiseptic agent included chlorhexidine (CHX) in their maintenance program, whereas the concentration and duration varied among the studies. Where mentioned, all studies provided supragingival cleanings; in two cases, oral hygiene instructions and motivation were given. The frequency of follow-up intervals throughout the different reviews was heterogeneous.

Evaluation of the maintenance programs

The following aspects of the post-interventional maintenance protocols were analyzed. The recall frequency, including mechanical re-instrumentation and/or re-motivation in the first 2 months after the intervention, was recorded, as was whether or not antiseptic rinsing was utilized (active ingredient, concentration, frequency, and duration). The results were organized in a subgroup analysis assessing the change in pocket depth reduction. Subgroups were defined as follows: (I) the recall frequency during the first 2 months, with 1 = two or fewer visits (≤2) and 2 = three or more visits within the 2 months (≥3); and (II) the duration of adjunctive use of antiseptics: (A) 2 weeks or less, (B) up to 4 weeks, and (C) more than 5 weeks. According to this classification, the lowest level of maintenance strategy was therefore 1A and the highest level 2C (see the sketch further below for a toy encoding of this scheme). Based on this classification system design, further subgroup combinations were possible.

Study selection

In total, three reviews were identified by the electronic database search. Since the publication by Heitz-Mayfield and co-workers dates back to 2002 and was not up to date, an additional investigation was initiated. The latter revealed another 697 publications. After the independent screening procedure by two of the authors (I.D. and P.R.S.), eight studies were included for the full-text analysis. Finally, one additional study met the inclusion criteria and was entered in Table 1 for analysis [20] (Fig. 1).

Description of study maintenance protocol

The analysis of the maintenance protocols included reviews with three different periodontal approaches. Taking into account the heterogeneous designs of the 78 studies in total, most of the studies listed in the reviews followed a specific maintenance protocol after treatment. A detailed overview of the different maintenance protocols is given in Tables 1, 2, and 3. The focus of this analysis was the type and concentration of the antiseptic formula, as well as the instructions given concerning rinsing frequency and duration. In addition, the type of maintenance and the intervals after treatment were defined. With regard to antiseptic rinsing, all studies used chlorhexidine (CHX) in concentrations ranging from 0.06 to 0.2%. The latter, and therefore highest, concentration was used in roughly 50% of the studies. Patients were advised to rinse twice daily in most articles, whereas the individual antiseptic rinsing period varied significantly. The minimum concentration and duration of adjunctive chemical plaque control were found in the SRP-with-adjunctive-antibiotics group of one study. There, patients were advised to use 0.06% antiseptic chlorhexidine once daily for 8 days [21][22][23] (Table 2). The most extensive antiseptic regimens were revealed in studies with regenerative treatment, ranging from a minimum rinsing concentration of 0.12% chlorhexidine with a rinsing period of up to 11 weeks, up to a concentration of 0.2% with a rinsing duration of 10 weeks [24,25] (Table 3). The recall duration and frequency, including professional plaque control, also varied throughout all the studies listed in the reviews. Table 4 compares the antiseptic duration and recall frequency 2 months after treatment within the different treatment groups.
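To make the 1A-2C subgroup labels used in Table 4 and the following paragraphs concrete, here is a toy encoding of the classification defined above. The function name is hypothetical, and the handling of the gap between 4 and 5 weeks of rinsing (category B vs. C) is our assumption, since the text leaves that boundary open.

```python
# Toy encoding of the maintenance-protocol classification (illustrative only).

def classify_maintenance(recall_visits: int, rinse_weeks: float) -> str:
    """Return a label such as '1A' (least vigorous) or '2C' (most vigorous)."""
    protocol = "1" if recall_visits <= 2 else "2"   # visits in the first 2 months
    if rinse_weeks <= 2:
        chx = "A"
    elif rinse_weeks <= 4:
        chx = "B"
    else:
        chx = "C"   # text says 'more than 5 weeks'; 4-5 weeks is left ambiguous
    return protocol + chx

print(classify_maintenance(1, 2))   # '1A' -- least vigorous maintenance strategy
print(classify_maintenance(4, 10))  # '2C' -- most vigorous maintenance strategy
```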
The non-surgical approach group chose to administer the higher chlorhexidine concentration. Not captured in Table 4 are the five studies within the scaling and root planing group in which patients appeared only 3 months after the intervention; whether or not these patients were advised to rinse with antiseptic agents is not mentioned in the studies (Table 1). Seventeen out of 34 studies in the group with SRP and the use of adjunctive antibiotics listed in the review by Zandbergen and co-workers (2013) prescribed chlorhexidine mouth rinse. In this review of 34 studies in total, 20 studies performed a post-interventional maintenance protocol of two or fewer visits. Two studies performed three or more visits within the 2 months following the procedure. The regenerative approach showed 27 out of 30 studies with adjunctive use of antiseptic agents, and 26 studies included a maintenance protocol within the 2 months; 23 studies scheduled their patients more than three times in the 2 months following the regenerative intervention. The reasons for this strict maintenance protocol, as described by the authors, were suture removal after surgery, polishing for plaque control owing to tooth-brushing abstention at the site of surgery, and finally regular supragingival cleanings [17]. Overall, the more sophisticated a treatment intervention was, the greater the tendency to increase maintenance frequency, mouth-rinse concentration, and duration.

Influence of different maintenance programs on probing depth reduction

A graphical representation of the results is given in a forest plot (Fig. 2). However, to start with, it is important to highlight the nature of this analysis: it is purely based on the data of the regenerative procedures presented in the systematic review by Graziani and co-workers (2012). There were complications along the way in extracting the needed data from the other reviews. Therefore, these pooled results cannot be statistically analyzed, directly compared, or interpreted across treatment modalities. Nevertheless, looking at the pooled data, the regenerative studies with more recall interventions and a longer duration of antiseptic agents after surgery displayed a greater PPD reduction compared to the studies with less intensive protocols, i.e. shorter rinsing periods and/or fewer recall visits. The highest mean difference was observed in protocol 2/CHX = C, with a mean probing pocket depth difference of 3.7 mm. In general, there was an increase in the observed effect with increasing baseline PPD. Between protocol 1/CHX = C and protocol 2/CHX = C (both groups having four studies with a comparable range of baseline probing pocket depths), the mean differences greatly differed, accounting for 2.81 mm versus 3.70 mm, respectively. Both groups show different probing depth reductions with different protocols and rinsing durations due to the type of surgical intervention and patient care needs. Again, the data presented and their evaluation were extracted from the forest plot performed only on the studies on regenerative therapy.

Quality assessment

Graziani et al. and Zandbergen et al. presented a quality assessment to estimate the risk of bias. Nine articles used adequate methods of study design, unclear methods were used in 21 articles, and inadequate methods in eight articles [17]. Out of 28 studies, 15 demonstrated a low potential risk of bias; the remaining studies showed moderate to high risk of bias [16]. Heitz-Mayfield et al. justified the missing quality assessment with the limited number of studies [15].
Discussion

Based on the premise of peer-reviewed papers, this study's approach was to pool the evidence and extract the data regarding maintenance care intervals and procedures. The aim of this article was to shed specific light on the post-interventional protocols and to elaborate any potential differences between the different therapeutic approaches, but not to compare the actual outcomes, for obvious reasons. The summary performed must not be understood as an inadmissible comparison of the clinical results of different treatment approaches. Nevertheless, the appraisal of differences in post-interventional maintenance programs and their impact on periodontal healing was done for the surgical interventions with regard to pocket depth reduction. As an interesting main finding, the studies analyzed in the three reviews showed different post-interventional plaque control strategies with regard to chemical and mechanical plaque control regimens among the different treatment groups. For instance, 1 to 2 weeks reflects a reasonable time span after surgical therapy until sutures are removed; other time points were adjusted to 1 and 2 months. The regenerative surgical approach showed the highest degree of maintenance effort after the intervention. An explanation for the continuous monitoring is the nature of regenerative therapy, since this therapy is invasive and expensive. Nevertheless, a prospective clinical study on patients undergoing one-stage full-mouth scaling and root planing has demonstrated a statistically significant benefit in probing depth and clinical attachment gain after 3 months of extensive use of CHX mouth rinse [26]. Due to the fact that all studies included here were part of peer-reviewed reviews, outcome measurements such as PPD and CAL were not weighed against each other; only the mean PPD differences of the regenerative studies are presented in the forest plot. However, one systematic review did not meet the current standard requirements for systematic reviews due to its earlier publishing date. Hence, a new search was undertaken to compensate for this. In addition, the classification used to evaluate the maintenance protocols was arbitrarily set, which might be considered a shortcoming of the present study; however, it reflects potentially relevant time frames in the course of periodontal therapy. Postoperative success is determined by many factors, such as anatomical and technical factors, patient compliance, plaque control, and cigarette smoking. All of these are factors that can directly affect the predictability of periodontal regeneration [27]. Thus, low plaque scores have been shown to reduce the risk of membrane exposure and infection and to allow better complication management [25,28]. These factors inevitably also lead to more stringent protocols, which is mainly justified by infection control and healing optimization. Common procedures, such as the intake of adjunctive antibiotics or anti-inflammatory medication during regeneration, could also be one factor for a favorable outcome. The importance of postoperative plaque control in determining the outcome of periodontal surgery is well established and has been recognized in the literature for a long time [29]. In contrast, studies using systemic antibiotics as an adjunct to SRP disclosed an opposite tendency.
Non-surgical therapy with systemic antibiotics is considered a more cost-effective treatment alternative in contrast to sophisticated regenerative surgery. Its aim is to reduce the need for any surgical therapy [30,31]. In addition, fewer postoperative complications may be expected, given that neither surgery has been performed nor foreign materials have been implanted; quite to the contrary, patients were only under the protection of antibiotics. Overall, two different periodontal procedures with their specific therapy goals, extent of treated sites, and different healing needs make it challenging to compare and evaluate the results. However, plaque scores after 3 months were quite high in some studies and reached a plaque index of above 30% at re-evaluation [32][33][34]. Some studies did not even report on plaque indices, which made a more detailed assessment of this important parameter impossible. Therefore, it remains unclear to what extent decreased plaque levels would have led to a better clinical outcome. In contrast, evidence suggests that the occurrence of re-established plaque may lead to recolonization, less healing, and persistence of the original pathology [35,36]. Missing quantitative data on probing depth reductions at the requested time points also made it impossible to assess and compare the results of the non-surgical interventions and to include those data in the forest plot. Periodontal sites that could be influenced were suprabony and infrabony defects as well as pockets with furcation involvement. Inarguably, the role and potential of adequate plaque control during therapy and afterwards have an impact on the subgingival microbiota [37]. The importance of an adequate maintenance protocol for success or failure in periodontal therapy has therefore been recognized as an achievable goal for decades [38].

Key to classification codes (Table 4, Fig. 2): Duration of antiseptic use, CHX: A = 2 weeks or less; B = up to 4 weeks; C = more than 5 weeks. Number of recall visits following periodontal treatment within the first 2 months, Protocol: 1 = two or fewer visits; 2 = three or more recall appointments within the first 2 months.

Conclusion

By tendency, regenerative studies showed a longer duration of antiseptic mouth rinse and a more intensive maintenance protocol compared to non-surgical approaches. However, vigorous recall intervals should be justified by evidence rather than by the sophistication of the treatment alone. To date, there is little evidence on how elaborate a post-treatment or postoperative protocol should be in order to benefit the patient. Carefully executed prospective studies on this topic are still warranted.

Abbreviations

CAL, clinical attachment level; GTR, guided tissue regeneration; PI, plaque index; PPD, probing pocket depth; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; RCT, randomized controlled trial; REC, recession; SPT, supportive periodontal treatment; SRP, scaling and root planing
Uncertainty in tuberculosis clinical decision-making: An umbrella review with systematic methods and thematic analysis

Tuberculosis is a major infectious disease worldwide, but currently available diagnostics have suboptimal accuracy, particularly in patients unable to expectorate, and are often unavailable at the point-of-care in resource-limited settings. Test/treatment decisions are, therefore, often made on clinical grounds. We hypothesized that contextual factors beyond disease probability may influence clinical decisions about when to test and when to treat for tuberculosis. This umbrella review aimed to identify such factors, and to develop a framework for uncertainty in tuberculosis clinical decision-making. Systematic reviews were searched in seven databases (MEDLINE, CINAHL Complete, Embase, Scopus, Cochrane, PROSPERO, Epistemonikos) using predetermined search criteria. Findings were classified as barriers and facilitators for testing or treatment decisions, and thematically analysed based on a multi-level model of uncertainty in health care. We included 27 reviews. Study designs and primary aims were heterogeneous, with seven meta-analyses and three qualitative evidence syntheses. Facilitators for decisions to test included providers' advanced professional qualifications and confidence in test results, and the availability of automated diagnostics with quick turnaround times. Common barriers to requesting a diagnostic test included poor provider tuberculosis knowledge, fear of acquiring tuberculosis through respiratory sampling, scarcity of healthcare resources, and complexity of specimen collection. Facilitators for empiric treatment included patients' young age, severe sickness, and test inaccessibility. Main barriers to treatment included communication obstacles and providers' high confidence in negative test results (irrespective of negative predictive value). Multiple sources of uncertainty were identified at the patient, provider, diagnostic test, and healthcare system levels. Complex determinants of uncertainty influenced decision-making. This could result in delayed or missed diagnosis and treatment opportunities. It is important to understand the variability associated with patient-provider clinical encounters and healthcare settings, clinicians' attitudes and experiences, as well as diagnostic test characteristics, to improve clinical practices and allow an impactful introduction of novel diagnostics.

Introduction

Tuberculosis (TB) is a major infectious cause of morbidity and mortality globally. In 2022, 7.5 million people were diagnosed with TB, and 1.3 million people died because of the disease [1]. Missed or delayed TB diagnosis and treatment and low quality of care remain critical obstacles to disease control and improving health outcomes [2,3]. To minimize diagnostic and treatment delays, high quality TB services should include access to rapid, affordable, and accurate tests, such as the molecular WHO-recommended rapid diagnostics (mWRD) [4]. However, mWRD are seldom available at the point-of-care in resource-limited settings. Despite massive efforts to coordinate the global roll-out of GeneXpert (Cepheid, USA), recent data still show that this test is unavailable in many peripheral settings, and more generally the underutilization of modern TB diagnostic technologies [5,6].
The underutilization of diagnostics may arise due to a variety of factors, including as a consequence of providers' know-do gap [7]. This may become particularly evident in situations where care is tailored around the patient's perceived needs (e.g., clinicians offering a more affordable but less accurate diagnostic test) and best practices are not implemented (e.g., clinicians choosing quick symptom relief with low-cost pharmaceuticals over diagnostic certainty) [7]. Moreover, in resource-limited settings, when a patient presents with signs and symptoms suggestive of TB, clinicians may decide to start treatment based solely on clinical grounds, regardless of test availability [8,9]. To standardize decision-making, pre- and post-test disease probabilities have been used to determine the thresholds for testing and treatment decisions [10,11]. The provider determines a pre-test probability of disease, which varies depending on clinical signs and symptoms as well as the provider's experience, knowledge, and health care setting. The provider then decides whether to move forward with testing or initiating treatment. Following testing, the provider determines the post-test probability of disease and decides whether to start or withhold TB therapy [11]. There have also been multiple attempts to develop scoring systems and clinical prediction models for TB screening and diagnosis [12][13][14][15][16]. Scoring systems can help to calculate the probability of TB disease in a reproducible way and might be particularly helpful in paediatric TB, where currently available diagnostic tests lack high sensitivity. Additionally, clinical algorithms might help determine when testing is helpful and when a negative test is insufficient to withhold treatment [17]. However, in reality, the decision to test or treat presumptive TB cases can be affected by contextual variables beyond accessibility to diagnostics or a mere computation of disease probability [18]. Provider characteristics, including their ability to cope with complexity, risk, and uncertainty, contribute to process variability [19]. Uncertainty is an inevitable component of clinical practice and can occur throughout the decision-making process: when formulating clinical hypotheses, identifying a diagnosis, choosing a test and interpreting its result, and interpreting patient preferences [20]. Multilevel models of uncertainty emphasize the dynamic interplay between different sources and types of uncertainty at each level, and may be useful to classify the challenges of clinical decision-making [20]. Understanding uncertainty in the TB decision-making process, and the reasons why a provider would initiate empiric treatment or would not utilize a microbiological test even when available, is important to develop diagnostic tools that improve TB diagnosis and care behaviours and practices, and to project the impact of the introduction of novel diagnostic aids [21]. This umbrella review of systematic reviews (SR) aimed to identify factors influencing providers' decisions to test for TB and initiate TB treatment in adult and paediatric patients with presumptive TB in high-TB and TB/HIV burden countries [22].
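As a worked illustration of the pre-/post-test probability logic referenced above, the sketch below applies standard likelihood-ratio arithmetic. The sensitivity, specificity, and pre-test probability used are invented for the example and are not drawn from the included reviews.

```python
# Hedged sketch of pre-/post-test probability via likelihood ratios.

def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float, test_positive: bool) -> float:
    """Update disease probability after a test result using Bayes' rule in odds form."""
    lr = (sensitivity / (1 - specificity) if test_positive
          else (1 - sensitivity) / specificity)   # LR+ or LR-
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# e.g. pre-test probability 0.30 with a test of 85% sensitivity / 98% specificity
print(round(post_test_probability(0.30, 0.85, 0.98, True), 2))   # ~0.95 if positive
print(round(post_test_probability(0.30, 0.85, 0.98, False), 2))  # ~0.06 if negative
```

Whether either post-test value crosses a treatment (or treatment-withholding) threshold is exactly the kind of judgment that, as the review argues, contextual factors can push in either direction.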
Study design rationale and methodology

An initial scoping search was conducted on MEDLINE (via OVID) for terms related to "tuberculosis" and "decision-making", and identified several reviews relevant to our research question [23][24][25]. Since most records evaluated either qualitative or quantitative primary studies, and often reported complementary findings, we chose an umbrella review design to allow for the inclusion of these reviews with a broad scope of inquiry and to achieve a higher level of synthesis [26][27][28]. The study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [29]. The Joanna Briggs Institute (JBI) guidelines for umbrella reviews [28,30] and the Cochrane guidance for overviews of reviews [31] were also followed to address the specific issues arising when conducting umbrella reviews. The methodology of this review was prespecified in a protocol [32].

Search strategy

Using a combination of key terms to maximize sensitivity, seven electronic databases were searched: MEDLINE (via OVID), CINAHL Complete, Embase, Scopus, Cochrane Central, the PROSPERO register, and the Epistemonikos database. The search was limited from January 2007 (considering that the development of Xpert MTB/RIF was completed in 2009) to the date of the search, which was the 4th of July 2022. The search was rerun on the 21st of July 2023. We developed a comprehensive list of keywords and synonyms for each broad domain: 1) TB, 2) clinical decision-making. Terms were searched individually first and then combined using Boolean operators. The search was piloted in MEDLINE and repeated in all databases. Where applicable, MeSH and free-text terms were combined to identify relevant studies. The search strategy was developed with the support of a librarian at the LSHTM. Details on the search strategy are presented in S1 Appendix. Articles in English, French, Spanish, Portuguese, or Italian were considered. A search of the grey literature was not conducted.

Selection and appraisal of records

Records were selected on predefined inclusion and exclusion criteria guided by the Population, Intervention, Comparison, Outcome and Study design/setting (PICOS) framework (S1 Table) [33]. Inclusion criteria consisted of population (individuals with presumptive pulmonary TB and health care providers involved in TB diagnosis and treatment), findings/outcomes (any relevant to clinical decision-making), and setting (high TB burden countries). We considered relevant to decision-making any intervention, action, or event that influenced the diagnosis of TB. SRs, meta-analyses, and SRs of qualitative studies (hereinafter referred to as qualitative evidence syntheses) were included. Articles exclusively on drug-resistant TB, non-review articles, and reviews that did not use systematic methods were excluded (S1 Table). Following removal of duplicates, the title/abstract screening was carried out by a single reviewer (FWB). The full text of selected records was then examined for inclusion in the study, based on the predefined criteria (S1 Table).

Quality appraisal

Methodological quality, risk of bias, and reporting quality of reviews were assessed using the JBI checklist for SRs [28,30]. No records were excluded on grounds of quality, due to a lack of consensus on the most appropriate tools and approaches for managing low-quality reviews in umbrella reviews [34] (S2 Table). Where available, GRADE assessments [35,36] were extracted and reported.

Overlap assessment

Several approaches have been proposed for overlap management in umbrella reviews [37]. We included all eligible reviews and documented the extent of overlap in primary studies using the Corrected Covered Area (CCA) index [37]. After obtaining the overall CCA, pairwise indexes were calculated (S1 Fig). For reviews with moderate to high pairwise CCA, research aims and reported outcomes were examined. If two reviews had the same aims, findings from the highest-quality review were described [37].
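As a hedged sketch of the CCA computation cited above, the snippet below implements the formula CCA = (N − r)/(rc − r), as described by Pieper and colleagues, where N is the total number of included citations across reviews (counting repeats), r the number of unique citations, and c the number of reviews. The counts in the example are invented for illustration.

```python
# Sketch of the Corrected Covered Area (CCA) overlap index (illustrative only).

def cca(total_citations: int, unique_citations: int, n_reviews: int) -> float:
    """CCA = (N - r) / (r * c - r): overlap corrected for first occurrences."""
    n, r, c = total_citations, unique_citations, n_reviews
    return (n - r) / (r * c - r)

# e.g. 120 citation slots across 5 reviews covering 100 unique primary studies
print(round(cca(120, 100, 5) * 100, 1))  # 5.0% -- commonly interpreted as slight overlap
```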
Overlap assessment

Several approaches have been proposed for overlap management in umbrella reviews [37]. We included all eligible reviews and documented the extent of overlap in primary studies using the Corrected Covered Area (CCA) index [37]. After obtaining the overall CCA, pairwise indexes were calculated (S1 Fig). For reviews with moderate to high pairwise CCA, research aims and reported outcomes were examined. If two reviews had the same aims, findings from the highest quality review were described [37].

Data extraction

Study characteristics and data of interest for included records were extracted by a single reviewer (FWB) [30,31,38]. These included: type of review, title, authors, publication year, number of studies and participants included in the review, aims/objectives/PICO question (or equivalent), search strategy, methodological quality/risk of bias, and certainty of evidence assessment. For reviews examining global data, only findings pertinent to high-burden TB settings were extracted. Data extraction also indicated where pooled analyses included non-high TB burden countries. Primary studies from reviews were not retrieved.

Data synthesis

Data synthesis used a systematic narrative approach for umbrella reviews [38], which involved thematic content analysis and coding of findings from each review to identify recurring themes associated with factors influencing TB clinical decision-making. NVivo (version 1.5, 2021, QSR International Pty Ltd, Australia) was used to iteratively code extracted key data. Themes were developed separately for quantitative and qualitative studies, then combined and presented complementarily [39,40]. Barriers and facilitators for TB testing or treatment decisions from each review were coded first, and then grouped under common themes associated with decision-making uncertainty, based on the taxonomy developed by Eachempati et al. [20]. The taxonomy is organized around macro (society and community), meso (group relationships), and micro (individual) levels of uncertainty to emphasize the dynamic interplay between different sources and types of uncertainty at each level, and may be useful to classify challenges in health care decision-making [20].

Recurring themes were further classified based on an adapted version of the WHO conceptual framework representing the TB diagnosis and care continuum [41]. The framework helped to identify four levels (patient, provider, health system, diagnostic test) of factors influencing TB clinical decision-making, including three time-points (patient-provider encounter, diagnosis, treatment initiation) for decision-making. The framework captures both the determinants (i.e., what causes decisional uncertainty) and the broader sources (i.e., what contributes to the variability of decisional outcomes) of uncertainty in the decision-making process.

Definitions

Presumptive pulmonary TB was defined as clinical/pre-test suspicion, or post-test suspicion despite a negative test. Diagnostic delay was defined as the time lag from first access to the health system/consultation with a provider to diagnosis; treatment delay was defined as the time lag from diagnosis to treatment initiation. Provider/health system delay was used to refer to any diagnostic or treatment delay attributable to provider or health system factors (to differentiate from causes of delay attributable to patient factors).
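For readers unfamiliar with the CCA index used in the overlap assessment above, a minimal sketch of the calculation [37] is given below; the numbers are invented for illustration and are not taken from the included reviews.

```python
def corrected_covered_area(total_citations: int, unique_studies: int,
                           n_reviews: int) -> float:
    """Corrected Covered Area: CCA = (N - r) / (r * c - r), where N is the
    total number of primary-study citations summed over all reviews, r the
    number of unique primary studies, and c the number of reviews."""
    r, c = unique_studies, n_reviews
    return (total_citations - r) / (r * c - r)

# Invented example: 3 reviews citing 50 primary studies in total,
# of which 40 are unique.
cca = corrected_covered_area(total_citations=50, unique_studies=40, n_reviews=3)
print(f"CCA = {cca:.1%}")  # 12.5%, 'high' overlap on the usual cut-offs
```

On the commonly used cut-offs, a CCA of 0-5% indicates slight overlap, 5-10% moderate, 10-15% high, and above 15% very high.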
Review characteristics

Database searches yielded 8542 records. After duplicate removal, 7345 unique records were screened by title/abstract. After full-text screening of 110 records, a total of 27 reviews were included (Table 1). The PRISMA flow chart detailing the phases of study selection is presented in Fig 1. Articles were published between 2008 and 2023. Records included nine meta-analyses, three qualitative evidence syntheses, and 15 mixed-methods narrative syntheses (Table 1). Primary studies included in the reviews spanned from 1970 to 2021 and were mostly observational (Table 1).

Reviews varied in inclusion criteria, outcomes, settings, and population. Based on their primary aim, reviews were classified into four main categories: diagnostic and treatment delays (n = 8); knowledge, attitudes, and practices of TB healthcare providers and end users (n = 5); barriers and facilitators to utilization of TB diagnostic services (n = 10); and diagnostic test impact on diagnosis and treatment (n = 4). Most reviews included primary studies with adult populations or did not include sub-group analysis by age. One review focused specifically on children and adolescents [25]. Key population and outcome definitions were generally consistent. Prior to the review, standardized definitions were developed, allowing for direct comparison and a narrative synthesis of findings (Table 1): diagnostic delay (time lag from first access to the health system/consultation with a provider to diagnosis); treatment delay (time lag from diagnosis to treatment initiation); presumptive TB (any individual not on TB treatment presenting with any sign or symptom suggestive of TB, often referred to as "TB suspect" across reviews and primary studies; the clinical signs and symptoms used for inclusion in primary studies and reviews may vary); TB case (any individual clinically diagnosed or bacteriologically confirmed with TB at the end of the TB cascade or primary study; time-points may vary across studies and reviews); and provider (any individual delivering health services and responsible for formulating diagnoses/diagnostic hypotheses, and/or prescribing diagnostic tests, and/or prescribing treatment, across various healthcare settings and levels, including informal healthcare providers).

Most reviews were of fair or good methodological quality (one point was assigned for each of the 11 JBI criteria scored "yes"; "good" indicated reviews that scored 8/11 and above, "fair" indicated reviews that scored between 5 and 7, and "poor" indicated reviews that scored 4/11 and below). The main areas compromising methodological quality and confidence in findings were publication bias, not using consistent methods to minimize errors in data extraction, and not grading the quality of evidence (S2 Table). Based on the global CCA index, most reviews had very low to no overlap. Seven pairs had high or very high primary source overlap. The citations list from one review [42] was not available and hence was not included in CCA calculations (S1 Fig).

Main findings from thematic content analysis

Through iterative thematic analysis, 15 recurring themes were identified. Applying an integrated multilevel model of uncertainty in health care [20], the themes were classified by type of uncertainty (Table 2). Findings were then classified as barriers or facilitators for testing and treatment decisions (Fig 2). Synthesis enabled the development of a framework for uncertainty in TB decision-making, presented in Fig 3. Types of uncertainty were grouped in four macro-levels, corresponding to sources of uncertainty in TB clinical decision-making: patient-, provider-, diagnostics-, and health system-related uncertainty. The framework represents the relationship between the four sources of uncertainty and three key moments in clinical decision-making: the clinical encounter, the formulation of a diagnostic hypothesis, and treatment initiation. Different types of uncertainty may act synergistically at given time-points (Fig 3).

Clinical uncertainty

Clinical presentation. Half of the reviews mentioned the relationship between clinical presentation and clinicians' suspicion of TB.
Meta-analysis data from GeneXpert and urine lipoarabinomannan (LAM) diagnostic impact studies suggested a higher likelihood of being treated empirically for sicker patients requiring hospitalization [43,44]. In the review by Getnet et al., findings from observational studies indicated that the absence of cough and the presence of atypical symptoms, fever, or good clinical conditions were associated with provider diagnostic delay [45]. Similarly, patients presenting with chronic cough and other concomitant lung disease, including COVID-19, were reported to experience delays [24,46].

Socio-demographic characteristics. Three reviews identified several indicators of patient socio-economic status, including poor literacy, low income or unemployment, lack of health insurance, and rural residence, as factors associated with diagnostic delay [45,47,48]. The reviews by Yang et al. and Krishnan et al., which focused on gender-related differences in access to TB services and had moderate overlap, reported inconsistent evidence of a positive relationship between female sex and provider delay in TB diagnosis [47,49]. Yang et al. additionally reported differences by setting. For example, providers from Thailand and Vietnam were more likely to adhere to diagnostic guidelines with male patients, whereas providers from India offered testing with similar frequency to both women and men [49]. The meta-analysis by Getnet et al. showed no evidence of a difference in the proportion of male versus female patients diagnosed with TB at the 30-day mark (pooled odds ratio (OR) = 1.08, 95% CI 0.95-1.23) [45]. Similarly, Li et al. reported no evidence of an association between female sex and diagnostic delay in China (pooled OR = 1.00, 95% CI 0.83-1.22) [48].

TB-related risk factors. Treatment was often delayed in patients with a previous diagnosis of TB [50] and in patients who reported antibiotic usage prior to the clinical encounter [51]. Among paediatric patients, providers were less likely to start empiric treatment in cases with unknown TB exposure [25].
Collection of diagnostic specimens. Reviews reported that the inability of patients to produce sputum influenced decisions to initiate testing and treatment [52]. In Nathavitharana et al., the proportion of adults able to provide a sputum sample ranged between 57% and 97% in people living with HIV (PLHIV), depending on setting and severity/type of symptoms. In contrast, urine collection (for the LAM assay) was achieved in 99% of PLHIV aged 15 and above across three RCTs [43]. Challenges with specimen collection influenced decisions to withhold microbiological testing and either initiate empiric treatment or exclude TB solely on the basis of the clinical interview or radiological examination findings [52,53]. Engel et al. found, with high confidence in the evidence, that providers highly valued the possibility of using alternative samples for testing, such as urine or stool, particularly for paucibacillary cases and paediatric TB [52].

Side effects. Reviews also reported provider decisions to withhold or delay treatment initiation because of fear of TB drug side effects in children [25,52].

Personal uncertainty

Provider attitudes, beliefs, and stigma. Multiple reviews found that provider behaviour and discriminatory attitudes can impact TB diagnosis and treatment initiation [25,39,49,52-57]. In a qualitative evidence synthesis, Barnabishvili et al. reported that providers were less rigorous when interviewing older patients or foreigners during the clinical encounter [39]. Provider discrimination towards female patients, resulting in test underutilization and delays, emerged from narrative syntheses [39,45,48,51]. Provider TB/HIV coinfection-related stigma was reported in three reviews as a factor delaying diagnosis or treatment initiation [52,53,55], including one review with high confidence in the evidence [52].

Fear of infection. Two reviews based on qualitative data, including one with high confidence in the evidence [52], reported that providers were generally aware of the aerosol biohazard and hesitant to test for TB because of fear of acquiring the disease [51,52]. Fear of infection from respiratory specimen collection, particularly gastric aspiration, resulted in underutilization of diagnostic tools [52] or collection of poor-quality respiratory specimens [51,52]. In the context of the SARS-CoV-2 pandemic, some providers refused to collect respiratory specimens from presumptive TB patients presenting with COVID-19 symptoms [46].

Test characteristics and provider preference. Diagnostic accuracy, automation, and computer-based tests were highly valued by providers, based on moderate confidence in the evidence [52]. Among paediatric patients, difficulties in collecting respiratory specimens (e.g., induced sputum or gastric aspirate), the invasiveness of the procedure, and the lack of adequately trained staff were reported as barriers to test utilization [52,58].
Relational and knowledge exchange uncertainty

Patient-provider communication dynamics. Some reviews reported that provider miscommunication with patients was a potential cause of missed diagnoses [39,42,51-54]. The difficulty in communicating with the patient was often reported as a consequence of TB-related stigma, but it also arose from the use of metaphors in clinical explanations, resulting in patients not understanding diagnostic and therapeutic plans, and in losses to follow-up [53,54]. One review reported that male providers disclosed difficulties communicating with, and understanding health concerns from, female patients during consultation [39].

Epistemic uncertainty

Provider knowledge and qualification. Qualitative findings from twelve reviews suggested that suboptimal TB knowledge impacted providers' ability to prescribe diagnostic tests or caused providers to delay TB diagnosis and miss treatment opportunities [24,25,48,50,52,54,56,57,59-62]. In a review on the practices and knowledge of Indian providers, Satyanarayana et al. reported that the proportion of providers who suspected TB in the presence of a persistent cough of more than 2-3 weeks' duration ranged from 21% to 81%, and that less than 60% of patients with persistent cough were advised to undergo sputum examination [59].

A review by Teo et al. reported that poor clinical standards and low levels of TB knowledge among providers led to delays in TB diagnosis in 12 qualitative studies, with high confidence in the evidence [50]. Braham et al. reported one primary study in which less than 50% of providers were aware of the principal diagnostic tools needed for TB diagnosis [61]. Poor TB knowledge and clinical skills resulted in deferral of bacteriological testing and a preference for smear microscopy over molecular WHO-recommended rapid diagnostics (mWRDs), according to the narrative review by Shah et al. [57]. Additionally, the same review reported providers' unawareness of and non-adherence to diagnostic algorithms as a reason for missed diagnoses [57].

Health care workers with particularly low levels of knowledge included informal providers [62], public providers working at the primary level, private practitioners with limited awareness of TB, and traditional healers [23,24]. One review found that recognition of TB symptoms was associated with providers' level of qualification and public sector employment [42]. In contrast, age, sex, years of practice, experience, and level of qualification were not associated with identification of TB symptoms [42,54].

The meta-analysis by Amare et al. of nine intervention trials demonstrated that training interventions improve the ability of providers to diagnose TB, significantly increasing the number of bacteriologically confirmed cases [60].

Availability of policies and guidelines. A lack of clear and updated guidelines, and poor dissemination at primary healthcare levels and among private providers, led to poor referral for GeneXpert testing or inconsistency in the types of samples used [52,57,62]. The review by Shah et al. reported variability of guidelines and policies in the private sector as one cause of missed diagnoses [57].

Test uncertainty

Utilization and impact of diagnostic tools. Engel et al.
found that in settings where low-complexity mWRDs were easily accessible, providers reported a high level of trust in the test result [52]. The meta-analysis by Lee et al. reported that availability of mWRDs reduced diagnostic and treatment delays [63]. Three meta-analyses examined GeneXpert diagnostic impact [44,58,64], with outcomes reported only from the most recent review [44]. The use of GeneXpert (versus smear microscopy) had no effect on the proportion of participants treated for TB (risk ratio 1.10, 95% CI 0.98-1.23; GRADE: moderate confidence) [44]. This could reflect decisions to treat some patients empirically regardless of test results. The lower sensitivity of GeneXpert in paucibacillary forms of the disease, such as paediatric TB, was recognized as a limitation that would justify empiric treatment initiation [58].

Health-system uncertainty

Operational setting deficiencies. Twelve reviews reported on challenges at the health-system level. Inadequate staff training, lack of diagnostic resources, lack of personal protective equipment and infection prevention and control measures, and the absence of private rooms for clinical assessment were mentioned as potential contributors to missed diagnosis and treatment opportunities [25,42,46,51,55-57]. Private and rural clinics not offering TB services were associated with diagnostic delays compared with public, urban facilities, where providers had better access to tests and infrastructure [39,50,53,61].

Availability and timing of test results. Sullivan et al. reported missed treatment opportunities in children due to long waiting times for culture results [25]. Reviews found that rapid test turnaround time was important to accelerate therapeutic decisions [25,52,56], and that, according to providers, offering same-day test-and-treat would reduce gaps in missed treatment [52].

Diagnostic test availability, accessibility, and affordability. Reviews reported that the limited availability of resources for microbiological diagnosis (e.g., due to stock-outs, power cuts, and unreliable supply chains) was associated with GeneXpert underutilization and diagnostic delays [48,51,52,57]. Engel et al. reported on providers' perspectives regarding the impact of diagnostic accessibility and affordability on test and treatment decisions. Frequent stock-outs were reported to potentially undermine providers' confidence in the adoption of new diagnostics and discourage them from prescribing diagnostic tests in the future [52]. Further, some providers disclosed a preference for initiating treatment if patients incurred excessive costs for testing, regardless of test availability [52].

Discussion

This umbrella review showed the complexity of the multi-level factors that contribute to uncertainty in TB clinical decision-making, often resulting in under-utilization of diagnostic resources, misdiagnoses, empirical treatment or missed treatment opportunities, and diagnostic and treatment delays. The results of this study reinforce the concept that clinical decision-making is highly dependent on individual and interpersonal factors (provider, patient), but also closely linked to the operational context and the usability of diagnostic resources. These findings are important to inform the development of successful diagnostic aids and program implementation strategies, and to improve TB practices in high-burden, resource-limited settings.
An important output from this study was the consolidation of a framework to present multilevel factors associated with uncertainty in TB decision-making. We found that several factors related to the local context, and often beyond providers' control, were responsible for the discrepancy between TB testing and treatment decisions and scientific guidelines' recommendations. Most of the existing literature on TB diagnostics consists of diagnostic accuracy studies or randomized controlled trials that do not examine the challenges of clinical decision-making or the impact of health-system factors on diagnostic interventions. Rapid molecular diagnostics such as GeneXpert have had a great influence on TB care, but there are ongoing concerns about underutilization and sustainability that need to be addressed [6]. Unfortunately, diagnostic tests, despite being cheap, fast, and accurate, are not always used as recommended (or not used at all) in high-burden settings, and it is crucial to increase our understanding of the underlying reasons [8,65].

Reviews reported consistent evidence for patient characteristics and symptom variability and severity as primary sources of clinical uncertainty in TB decision-making [24,47,54,59,61]. When confronted with hospitalized patients, patients with advanced HIV disease, or paediatric patients, providers seemed more inclined to treat empirically, regardless of the availability of diagnostic aids, possibly also because of the complexity of obtaining clinical specimens from people in these categories [25,43]. Additionally, a history of previous TB diagnosis was associated with retreatment delays [50], potentially due to lack of confidence in the diagnosis or fear of drug side effects with injectables [66]. Further research is needed to uncover provider-related factors associated with retreatment decision-making, as rapid tests for second-line drug resistance testing and all-oral regimens become available [67,68].

Providers' limited knowledge of TB symptoms and approaches for clinical and diagnostic management, and insufficient familiarity with guidelines, were reported consistently as key contributors to delays in test and treatment decisions [24,25,48,50,52,54,56,57,59-62]. Epistemic uncertainty affected several aspects of decision-making, including estimating pre-test disease probabilities, deciding to use a diagnostic test, selecting appropriate specimens based on age and disease localization, collecting good-quality samples, and interpreting test results [50,59,61]. Conversely, the availability of highly qualified physicians, public sector facilities, and ease of access to mWRDs had a positive influence on testing decisions [42,52]. Notably, training interventions significantly improved case detection and test uptake by providers [60].
The central role of the provider in the decision-making process was also supported by extensive evidence on how interpersonal attitudes, beliefs, stigma, fear of infection, and test preferences affected test utilization and treatment decisions [25,39,49,52-57]. Personal sources of uncertainty, including fear of acquiring TB through respiratory sampling, were commonly reported drivers of diagnostic underutilization [51,52]. As seen with other respiratory infectious diseases, fear of infection was mostly associated with poor knowledge of biohazard mitigation strategies, ambiguous guidelines, and lack of resources [69]. These findings support the importance of enhancing comprehensive national training and educational programs for providers at all levels of care, and of engaging the private sector [61,70]. Similarly, the fear of acquiring TB could be, at least partially, addressed through continuous training and the implementation of infection prevention and control measures [71].

The high variability of provider-patient interactions during the clinical encounter was often reported as a source of relational uncertainty affecting the outcomes of the clinical decision-making process [39,42,53]. Provider personal biases could result in the inability or unwillingness to collect all necessary clinical information, diagnostic test under-utilization, misdiagnosis, and diagnostic delays, especially with female patients [48,49,51,61]. Although findings from meta-analyses did not confirm the association between female sex and diagnostic delays, moderate-quality qualitative sources reported an impact of gender on clinical decision-making [48,49,51]. Gender-related disparities in TB are well known, especially with regard to health-seeking behaviours and retention in care [51]. While TB incidence is greater in men [72], women generally face additional barriers related to care access, stigma, and the psychosocial consequences of the diagnosis [51]. The findings from this study confirm the importance of a gender-based approach to TB, as advocated by WHO [73]. At the same time, quantitative and qualitative studies across settings and countries with different gender norms are needed to gain further insight into gaps in the TB diagnostic cascade, gender inequalities, and discrimination, and to inform TB interventions that have the capacity to overcome gender barriers [74].
Providers had high confidence in rapid diagnostic tests, but the confidence in mWRDs, namely GeneXpert, appeared to be generated by trust in a computer-based test rather than by an understanding of the technology and knowledge of its diagnostic accuracy [43,52]. It should be noted that, paradoxically, blind use of diagnostics could represent a double-edged sword if overconfidence in results became a substitute for clinical reasoning [75]. The burden of misdiagnosis was also supported by findings from a large autopsy study demonstrating a high prevalence of TB among children and PLHIV that was missed at clinical diagnosis [76]. Evaluating the impact of testing on clinical decisions and empiric treatment [77,78] will be important, as missing false-negative patients contributes to TB morbidity and mortality, particularly among people who cannot expectorate or who have paucibacillary disease, such as young children, for whom currently available assays have lower sensitivities [79-81].

Health system uncertainty emerged as an important driver of variability in TB decision-making. The unavailability or inaccessibility of diagnostic resources contributed to uncertainty in the decisional process and outcomes [25,52,55,56]. When diagnostic tests were available, several contextual factors, such as poor infrastructure and lack of administrative resources (infection prevention and control policies, insufficient training), represented barriers to test adoption, shifting the decisional bar towards empiric treatment initiation, particularly in children or very sick patients, or leading to missed treatment opportunities [25,52]. The absence of locally tailored guidelines was reported to contribute to epistemic uncertainty and variability in clinical management [52,62]. These findings confirm that resource allocation strategies, as well as training and guidelines, need to be more inclusive of the lower tiers of the health system [82].

This study also found that providers highly valued the possibility of using non-sputum samples for testing, such as urine or stool [52], highlighting the need for rapid adoption of sputum-free diagnostics, particularly for paucibacillary cases and paediatric TB [83].

In recent years, there has been unprecedented development of novel TB diagnostic technologies. As new products come to market, policy makers must decide which available tools to implement. Findings from this review support the idea that such decisions should not account exclusively for diagnostic assay characteristics (e.g., accuracy), but should also consider the acceptability and feasibility of tests within the health care infrastructure. As suggested by meta-analyses reporting inconclusive findings regarding the impact of GeneXpert on treatment initiation decisions [44,64], it is key to understand the real-world impact of diagnostics through robust operational research at the point of care.

Additionally, the increasing utilization of multiple tests or different specimens in parallel may exacerbate the challenges of results interpretation, particularly in children [84]. Understanding how clinicians manage conflicting results will be important to inform clinical algorithms.
Recently, significant progress has been made in the development and validation of clinical prediction models and algorithms to help standardize the decision-making process, particularly in contexts not yet reached by new diagnostic tools [85]. However, such tools rely on the assumption that a clinical consultation is a standardized event in which relevant clinical variables or risk factors would always be disclosed and inform disease probability. Nonetheless, as suggested by the findings of this review, a clinical encounter is an event influenced by multiple uncertainties [39,53,55,56]. Hence, it will be important to collect data on the real-life performance of such prediction models and algorithms, and to consider setting-specific adjustments and the integration of variables beyond patient clinical and risk factors. At the same time, the complex roots of uncertainty call for integrated efforts by policy makers, researchers, and programs to combine diagnostics research and implementation with staff training, guideline implementation and uptake, infrastructure development, transversal health education to combat stigma and discrimination, and investments at the most peripheral levels of health care systems globally.

Strengths and limitations

To the best of our knowledge, this is the first study to conceptualize and summarize sources and types of uncertainty in TB decision-making. The umbrella review approach allowed us to triangulate findings from varied study designs and outcomes while preserving high methodological standards. The review was conducted in a systematic manner in accordance with standardized guidance. Nonetheless, some limitations must be mentioned. First, a limitation of the umbrella review approach is the inability to conduct a detailed assessment of primary studies. Consequently, the study relied on the methods and quality of the included SRs, many of which were of moderate quality. Most reviews used a narrative synthesis approach, and only a few meta-analyses and one qualitative evidence synthesis reported on the quality of the evidence. Second, it was not possible to perform a meta-analysis of quantitative review findings due to heterogeneous inclusion criteria and outcome definitions. Third, it is possible that some relevant sources were missed, as grey literature was not included. Finally, the assessment of each record was performed by a single reviewer only, which may yield a lower sensitivity.

Conclusion

This study summarized the complex network of factors associated with decisional and outcome uncertainty in TB medical decision-making through a synthesis and thematic analysis of the systematic review literature. Different sources of uncertainty were found to influence provider choices around testing and treatment initiation, often resulting in diagnostic and treatment delays or missed diagnoses and treatment opportunities. Further, the application of a multi-level framework to classify uncertainty revealed the extent to which findings pertaining to different sources and types of uncertainty were intertwined. Gaps in TB diagnosis and treatment suggest the need to integrate evidence from studies that consider variations in healthcare systems and end users' attitudes, preferences, and experiences with interventions introducing new diagnostic tools. Such considerations are important to improve TB diagnosis, treatment, and quality of patient care, and to allow the impactful introduction of novel diagnostic aids in clinical practice worldwide.
The figure summarizes the multi-level (patient, provider, health system, diagnostic test) factors associated with TB clinical decision-making identified through thematic content analysis of the SRs. The factors were classified as barriers or facilitators for testing or treatment decisions and represented using the threshold model [10]. Several facilitators positively influenced providers' decisions to test (lowering the testing threshold), including the presence of typical symptoms and patient history, providers' personal attributes and experiences, workplace (public/urban facility), and the characteristics of available tests. Barriers to testing were the presence of confounding/atypical symptoms, inadequate TB knowledge and staff training, fear of infection, lack of resources, and the challenges of respiratory specimen collection. Empiric treatment decisions (treatment threshold) were facilitated by the presence of factors generally associated with an increased risk of severe disease or negative outcomes (young age, severe symptoms), unavailability or inaccessibility of diagnostic tests (e.g., because of costs), and lack of confidence in tests with low sensitivity. Providers were inclined to withhold treatment decisions when facing certain elements of patient history (e.g., unknown TB exposure), when waiting for test results, and in the presence of negative test results (without considering the possibility of a low negative predictive value).

During the clinical encounter, the provider assesses the patient's clinical variables (clinical uncertainty) to determine the disease probability and evaluate therapeutic benefit-harm trade-offs. Disease probability estimates depend on the provider's knowledge and experience (epistemic uncertainty). The provider's ability to conduct an informative, high-quality clinical assessment is also influenced by the patient-provider relationship and communication strategies (relational and knowledge exchange uncertainty), as well as by the provider's attitudes and beliefs (personal uncertainty). When a decision is made to test, the probability of disease is adjusted based on diagnostic test results (post-test probability). However, a negative test result may be insufficient to withhold therapy, considering the low sensitivity of currently available diagnostic tests and the potential benefit of empiric treatment (test uncertainty). Additionally, the provider may decide not to proceed with invasive specimen collection and testing because of individual risk assessments such as fear of infection (personal uncertainty). Thus, the characteristics of diagnostic tests can impact decision-making. Clinical decisions are further limited by healthcare setting constraints such as lack of skilled staff, poor infrastructure, and scarcity of diagnostic tools (health system uncertainty), and by the absence of local guidelines (epistemic uncertainty).
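As a concrete rendering of the threshold model referenced in the caption above [10], the sketch below (our own; the two threshold values are illustrative placeholders, since in practice they depend on test accuracy and the benefit-harm trade-off of treatment) encodes the canonical decision rule:

```python
def tb_decision(p_disease: float, test_threshold: float = 0.05,
                treatment_threshold: float = 0.60) -> str:
    """Pauker-Kassirer-style threshold rule: below the testing threshold,
    neither test nor treat; between the thresholds, test and update the
    probability; above the treatment threshold, treat without testing."""
    if p_disease < test_threshold:
        return "withhold testing and treatment"
    if p_disease < treatment_threshold:
        return "test, then update the probability with the result"
    return "treat empirically (a test result would not change management)"

for p in (0.02, 0.30, 0.80):
    print(f"P(TB) = {p:.0%}: {tb_decision(p)}")
```

The uncertainties catalogued in this review act precisely on the inputs of such a rule: they distort the probability estimate, move the thresholds, or bypass the rule altogether.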
Nano porous silicon microcavity sensor for determination of organic solvents and pesticide in water

In this paper we present a sensing method using a nano-porous silicon microcavity sensor, developed for the simultaneous determination of two volatile substances at different solvent concentrations, as well as of very low pesticide concentrations in water. The temperature of the solution and the velocity of the air stream flowing through the solution are used to control the response of the sensor for different solvent solutions. We study the dependence of the cavity-resonant wavelength shift on solvent concentration, velocity of the airflow and solution temperature. The wavelength shift depends linearly on concentration and increases with solution temperature and velocity of the airflow. The dependence of the wavelength shift on the solution temperature in the measurement carries the temperature dependence of the solvent vapor pressure, which characterizes each solvent. As a result, the dependence of the wavelength shift on the solution temperature discriminates between solutions of ethanol and acetone with different concentrations. This suggests a possibility for the simultaneous determination of volatile substances and their concentrations. In addition, this method is able to detect the presence of the atrazine pesticide through the shift of the resonant wavelength, with good sensitivity (0.3 nm pg−1 ml) and limit of detection (LOD) (0.8-1.4 pg ml−1), tested for concentrations in the range from 2.15 to 21.5 pg ml−1, which is the range relevant for monitoring acceptable water for human consumption.

Introduction

Porous silicon microcavities (PSMCs) make it possible to realize convenient and low-cost optical devices for the determination of organic solvents and of pesticides at very low concentration in liquid solutions, so PSMC devices show promise for simple and portable instruments for liquid-phase monitoring of environmental pollutants. Owing to its high specific surface area [1], porous silicon (PS) is an ideal transducer material for sensors of liquids [2,3] and vapors [4,5]. Recently, PS optical sensors have been designed as one-dimensional photonic crystal devices such as optical filters [6] and microcavities [7]. The principle of these sensors is the determination of the photonic crystal spectral shift caused by the refractive index change of the nano-porous silicon layers in the device due to the interaction with a liquid or gas. It follows from this principle that the response of the sensor depends only on the refractive index and therefore lacks specificity for the studied substances. Consequently, most current sensors based on PS photonic crystals only determine the concentration of a predefined substance. It is possible to use a physical or chemical method to overcome this drawback. A commonly used chemical method is the functionalization of the surface of the silicon nanocrystals in the porous layers [8,9]. This is a chemical process that creates new chemical bonds which combine selectively with molecules of the studied substances. Few published works have used a physical method to identify the analytes in sensors based on photonic crystals.
Sailor and co-workers [10] applied temperature cycles to a porous silica photonic crystal embedded in pure chemical vapors and were thereby able to distinguish between isopropanol, heptane, and cyclohexane; Patel et al [11,12] demonstrated the detection of glucose and methyl parathion with nano-scale porous silicon microcavity sensors. The sensitivity of optical sensors, defined as the ratio of the wavelength shift to the change of the ambient refractive index, depends on the concentration change of the solution and can be enhanced by designing suitable structural parameters such as the thickness, porosity and number of porous layers in the device [13], or by creating a stress on the sensor surface [14]. In our previous work we developed a nano-porous silicon microcavity sensor for the determination of the concentration of ethanol and methanol adulterants in gasoline [15].

This paper presents a developed method using a porous silicon microcavity sensor for the determination of organic solvents and of pesticides, such as atrazine, at very low concentration in water. We set up a measurement in which the temperature of the solution and the velocity of the airflow carrying the solvent vapor from solutions of ethanol and acetone control the response of the sensor. As mentioned above, the sensor uses the physico-chemical properties of the analyzed substances as 'characteristic signals' involved in its response. The sensor response is given by the shift of the resonant wavelength of the microcavity when the sensor is immersed in the flow of solvent vapor. We present a study of the dependence of the wavelength shift on solvent concentration, velocity of the airflow and solution temperature. From this dependence, we aim to enhance the sensitivity of the sensors and the specificity of the measurement.

Experimental

Porous silicon microcavities were fabricated by an electrochemical method in a process presented in our previous work [16]. In particular, the electrochemical process was carried out on a (100)-oriented, highly boron-doped p-type Si wafer (resistivity 0.01-0.1 Ω cm) in a 16% hydrofluoric acid (HF) solution and ethanol at various current densities. Aluminum was evaporated onto the backside of the Si wafer, which was then annealed at 420 °C in a nitrogen atmosphere for 45 min in order to ensure a good Ohmic contact. The electrochemical process was controlled by a computer program using Galvanostat equipment (Autolab PGSTAT 30), so precise control over the electrical current and etching time was achieved. Before electrochemical etching, the Si wafer was dipped in 5% HF solution for a minute to remove the native oxide. The electrochemical anodization cell was made of polytetrafluoroethylene (Teflon) resin and was designed to have an exposed etching area of approximately 0.79 cm2. After anodization, the sample was washed with 98% ethanol and dried in a primary vacuum. To convert the surface of the silicon nano-crystals from hydrophobic to hydrophilic, we oxidized the as-prepared sample in an ozone atmosphere for 45 min using an ozone generator (H01 BK Ozone with a capacity of 500 mg h−1). Cross-sectional and top-view images of the porous silicon microcavity were obtained using an ultra-high-resolution field-emission scanning electron microscope (FE-SEM) S-4800.
Figure 1 shows plan-view and cross-section images of the microcavity based on the (HL)3.5LL(HL)3 porous silicon multilayer structure, where the H and L labels correspond to high and low refractive index layers, respectively, and 3.5 means three and a half pairs of HL. We chose a structure with 3 and 3.5 pairs of HL because it gives a good reflectivity spectrum and an easily repeatable electrochemical etching process. The thicknesses of the high and low refractive index layers were 72 nm and 87 nm, respectively, with an accuracy of ±2 nm. This structure was obtained with anodization current densities of 15 mA cm−2 and 50 mA cm−2 and etching times of 5.56 s and 2.86 s for the high and low refractive index layers, respectively. For measurement of the reflective spectra of the samples, we used an ultraviolet-visible-near infrared (UV-vis-NIR) spectrophotometer (Varian Cary 5000) and a spectrometer (S-2000, Ocean Optics) with resolutions of 0.1 and 0.4 nm, respectively. The light source was a tungsten halogen lamp (Z 19, Narva). Figure 2 shows the reflectivity spectra of the microcavity before and after oxidization. The blue shift of the resonant wavelength after oxidization is due to a decrease in the effective refractive index of the porous layers in the microcavity [16]. From the experimental results we calculated refractive indices of 2.1 and 1.75 for the high and low refractive index layers, respectively.

Figure 3 shows the schema of the concentration measurement for volatile organics using a vapor sensor based on a porous silicon microcavity. In this schematic, valve 2 controls the velocity of the air stream through the flow meter, the test solvent chamber and the sample chamber. Valve 1 is opened only to refresh the porous matrix after a measurement. The thermostat controls the temperature of the liquid in the range from room temperature to 100 °C. In our experiment we used an optical fiber splitter BIF200 UV-VIS for light irradiation of the samples and for collecting the reflective spectrum of the microcavity. We also used an LM35D integrated circuit for measuring the temperature in the sample chamber; in our experimental setup this temperature was affected neither by the solution temperature nor by the airflow rate. Each experimental data run takes from 5 min to 7 min depending on the velocity of the airflow. The standard deviation of the wavelength shift from the average value of 5 experimental data runs is 0.6 nm.

Nano-porous silicon microcavity sensor for detection of organic solvents in gasoline

The basic characteristics of the porous silicon microcavity (PSM), i.e. the resonant wavelength shift (Δλ) caused by the ambient refractive index (n), were determined experimentally using a series of liquids with known refractive indices. The effective refractive index of a nano-porous silicon layer immersed into an organic solvent is increased due to the substitution of air with liquid in the pores, and consequently the optical thickness of the layer is increased. As a result, the resonant wavelength shift depends on the refractive index of the organic solvent. Table 1 presents a series of organic solvents, namely methanol 99.5%, ethanol 99.7%, isopropanol 99.7% and methylene chloride 99.5% (products of NHTC, China), with their refractive indices and the resonant wavelengths of the sensors dipped into the corresponding organic solvent for some minutes. Sensitivity (Δλ/Δn) is one of the most important parameters for evaluating the performance of the sensors.
Using the experimental data in table 1, we calculate a sensor sensitivity of about 200 nm RIU−1. The spectrophotometer (Varian Cary 5000) is able to detect a wavelength shift of 0.1 nm, so the minimum detectable refractive index change in the porous silicon layer is less than 10−3. Experiment shows that after complete evaporation of the organic solvent, the reflectance spectra of the sensors return to their original position (as in air). In our case the evaporation of organic solvents in open air at room temperature took 40-50 min, but this process takes only about 20 s when the samples are placed in a vacuum chamber at 10−1 torr. This means the change of the sensor reflectance spectra is temporary, which is useful for reversible optical sensing.

An important parameter of the microcavity sensor is the change of the refractive index of the porous layer. It depends on the refractive index of the liquid as well as on the porosity of the porous layer. In the Bruggeman effective medium approximation, the relation between the effective refractive index of the porous layers (n_PSi), the silicon refractive index (n_Si = 3.5), the void refractive index (n_void = 1) and the porosity (P) is given by the following equation:

(1 − P) (n_Si² − n_PSi²)/(n_Si² + 2n_PSi²) + P (n_void² − n_PSi²)/(n_void² + 2n_PSi²) = 0.    (1)

Simulation shows that the contrast of the porosities (i.e. refractive indices) of the layers strongly influences the wavelength shift (i.e. the sensitivity) of the microcavity. The contrast of the porosities is high when the change of current density in the electrochemical etching process is large. However, experiment shows that the imperfection of the interfaces between layers increases with a large change of current densities. In our work, when the porosity contrast between layers is more than 40%, the reflective spectra of the device become deformed in reflection intensity and in the linewidth of the transmittance zone. The curves C1 to C4 in figure 4 present the fitting of the simulated basic sensor characteristics to the experimental ones (curve E). The fitting showed that the porosity contrast between the two layers affects the sensor sensitivity (Δλ/Δn). The matching process yielded suitable porosities of 34% and 72% for the low- and high-porosity layers of the prepared sensor, respectively.

The microcavity-based sensors have been applied to the determination of different concentrations of ethanol and methanol in the commercial gasoline A92. Figure 5 shows the measured resonant wavelength shift of the microcavity sensor immersed into gasoline A92 with different concentrations of ethanol and methanol. In the case of an ethanol/A92 mixture, the resonant wavelength shifts by 3.6 nm when the ethanol concentration changes from 5% to 15% in the gasoline. With the sensitivity of the sensor as described above, the minimum detectable change of ethanol concentration in the gasoline is about 0.4%. In the case of methanol/A92, the wavelength shift is 7.2 nm between the 5% and 15% methanol mixtures. From these experimental data, we estimate that the elaborated sensor can distinguish a change of about 0.2% in the concentration of methanol in the gasoline.

Nano-porous silicon microcavity sensor to simultaneously detect organic solvents

It is known that the response of the sensor depends on the solvent vapor pressure in the sensor chamber [17]. This vapor pressure is related to the vapor pressure of the solvent in the solution chamber through the gas stream flowing through the solution.
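A numerical illustration of the Bruggeman relation (1) may be helpful before turning to the vapor-phase measurements. The sketch below (our own code, not the authors') solves equation (1) for the effective index at a given porosity. Note that the fitted porosities quoted above come from matching full reflectance spectra, so they need not coincide with what the bare two-component formula gives; the measured indices of 2.1 and 1.75 are lower than the formula predicts, presumably reflecting partial oxidation of the silicon skeleton.

```python
from scipy.optimize import brentq

def bruggeman_index(porosity, n_si=3.5, n_void=1.0):
    """Effective refractive index of a porous silicon layer from the
    two-component Bruggeman effective medium approximation, equation (1)."""
    def residual(n_eff):
        e, e_si, e_v = n_eff**2, n_si**2, n_void**2
        return ((1 - porosity) * (e_si - e) / (e_si + 2 * e)
                + porosity * (e_v - e) / (e_v + 2 * e))
    # The physical root always lies between the void and silicon indices.
    return brentq(residual, n_void, n_si)

for p in (0.34, 0.72):
    print(f"P = {p:.0%}: n_eff = {bruggeman_index(p):.2f}")
# P = 34%: n_eff = 2.63; P = 72%: n_eff = 1.51 (pure, unoxidized skeleton)
```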
Assuming that the vapor pressure in the solution chamber obeys the rules of vapor pressure in a closed system, the relation between the wavelength shift (Δλ), the vapor pressure in the solution chamber (P_solution) and the velocity of the airflow (V) can be crudely presented as

Δλ = θ(V) P_solution,    (2)

where θ(V) is an empirical function of V, which captures the dependence of the vapor concentration delivered from the solution on the velocity of the airflow. P_solution can be calculated by the following formulae [18]:

P_solution = Σ_i X_i P_i(T),   log10 P_i(T) = A_i − B_i/(C_i + T),    (3)

where X_i is the mole fraction of component i in the solution, P_i(T) is its pure-component vapor pressure at temperature T, and A_i, B_i and C_i are Antoine's coefficients. Equations (2) and (3) show that Δλ is a function of V, X_i and P_i(T). Below we consider these relations experimentally.

We carried out experiments on ethanol and acetone solutions. These are very common organic solvents, and some of their physical properties, such as boiling point, refractive index and Antoine's coefficients from [18], are shown in table 2. Figure 6 shows the dependence of Δλ on T, Δλ(T), for acetone and ethanol solutions with various concentrations at an airflow velocity of 0.84 ml s−1. Equations (2) and (3) show that we can regard the temperature dependence of Δλ(T) as the temperature dependence of P_i(T) modified by the multipliers X_i, if it is assumed that the contribution of water to the solution pressure is small. The experimental data of curve 1, which describes Δλ(T) of water, show the validity of this assumption in our measurement. P_i(T) steadily increases as temperature increases (i.e. it is a monotonically increasing function), so the curves of solvent solutions with various concentrations are separate; for example, curves 2-4 of the ethanol solutions or curves 5-7 of the acetone solutions. Using equation (3) we calculated the rate of change of P_i(T) for acetone and ethanol in the temperature range from 30 °C to 50 °C; its values are presented in table 2. The slope of P_acetone(T) is greater than that of P_ethanol(T) in the studied temperature range (see table 2), so the curves describing Δλ(T) of acetone and ethanol solutions intersect each other at most once (for example, curves 3 and 5) or do not intersect (for example, curves 3 and 4). Consequently, a curve describing Δλ(T) characterizes the solution of acetone (or ethanol) at a given concentration. In other words, the dependence of the wavelength shift on the solution temperature discriminates between solutions of ethanol and acetone with various concentrations [19].

Figure 7 shows the dependence of the resonant wavelength shift Δλ(C) on ethanol concentration, with the velocity of the airflow (V) and the temperature of the solution (T) as parameters of the measurements. It can be seen in figure 7 that the curve described by Δλ(C) is linear and that its slope, i.e. the sensitivity of the measurement, increases as V and T increase. These observations also follow from equations (2) and (3) when X_i is the variable and T and V are parameters. The linearity of this dependence is a favorable condition for the determination of solvent concentration, and the increase of the slope yields an increase in the sensitivity of the measurement. From the data in curves 2 and 3, obtained from measurements with parameters T and V at 45 °C and 0.84 ml s−1, and at 30 °C and 1.68 ml s−1, we obtain Δλ differences of about 18.5 nm and 10.0 nm, respectively, between 0% and 100% ethanol. However, measuring with this sensor in the liquid phase over this concentration range, we obtained a Δλ difference of only about 5 nm [20].
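To make the vapor-pressure calculation behind table 2 reproducible, the sketch below evaluates the Antoine part of equation (3) with commonly tabulated coefficients for ethanol and acetone (pressure in mmHg, temperature in °C); these coefficients are our assumption, and the values used in [18] may differ slightly.

```python
# Antoine form: log10 P[mmHg] = A - B / (C + T[degC]); widely tabulated values.
ANTOINE = {"ethanol": (8.20417, 1642.89, 230.300),
           "acetone": (7.11714, 1210.595, 229.664)}

def vapor_pressure(solvent, t_celsius):
    """Pure-component vapor pressure P_i(T) in mmHg from the Antoine equation."""
    a, b, c = ANTOINE[solvent]
    return 10 ** (a - b / (c + t_celsius))

for solvent in ("ethanol", "acetone"):
    p30, p50 = vapor_pressure(solvent, 30), vapor_pressure(solvent, 50)
    print(f"{solvent}: P(30 C) = {p30:.0f} mmHg, P(50 C) = {p50:.0f} mmHg, "
          f"mean slope = {(p50 - p30) / 20:.1f} mmHg/K")
# ethanol: ~78 -> ~220 mmHg (slope ~7 mmHg/K)
# acetone: ~285 -> ~614 mmHg (slope ~16 mmHg/K)
```

With these coefficients the acetone slope exceeds the ethanol slope over 30-50 °C, consistent with the behavior of the Δλ(T) curves discussed above.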
The sensitivity obtained from the measurement in the vapor phase, with T and V at 45 °C and 0.84 ml s−1 and at 30 °C and 1.68 ml s−1, therefore increases 3.7 and 2.0 times, respectively, compared with that in the liquid phase. We expect that the sensitivity of the measurement can be strongly improved with a reasonable combination of both parameters T and V.

Figure 8 shows the dependence of Δλ on V, Δλ(V), at a temperature of 30 °C with the concentrations of ethanol and acetone as parameters. It can be seen in figure 8 that the curves describing Δλ(V) are separate straight lines for the different concentrations of acetone and ethanol. This shows that the empirical function θ(V) is a linear function of V. Now we consider the properties of the slopes of the curves in figure 8. According to equation (2), the slope of the curve describing Δλ(V) increases as P_i and X_i increase. We apply this to curves 2 and 3, obtained from measurements with ethanol and acetone solutions at the same concentration (20%): the vapor pressure of acetone is larger than that of ethanol (see table 2), so the slope of curve 3 is larger than that of curve 2. We also apply it to curves 2 and 4, obtained from measurements with ethanol concentrations of 20% and 40%: the slope of curve 4 is larger than that of curve 2 due to the higher concentration. It follows from figure 8 that the dependence of the wavelength shift on the velocity of the airflow is linear, and the slopes Δλ/ΔV are 2.4 nm ml−1 s and 3.7 nm ml−1 s for the same 20% concentration of the ethanol and acetone solutions, respectively. In addition, when the concentration of the organic solvent increases, the slope Δλ/ΔV is enhanced (for example, Δλ/ΔV increases from 2.4 nm ml−1 s to 3.4 and 5.1 nm ml−1 s when the ethanol concentration increases from 20% to 30% and 40%, respectively). Based on this behavior, we can simultaneously determine the kind and the concentration of the organic content of the solutions. For example, 40% ethanol and 20% acetone have a similar temperature dependence (see figure 6) but can be discriminated by their airflow velocity dependence, while 30% ethanol and 20% acetone have a similar airflow velocity dependence (as can be seen from figure 8) but can be discriminated by their temperature dependence.

Nano-porous silicon microcavity sensor for detection of pesticide concentration in water

Very low concentration atrazine solutions were obtained by dilution from a stock mixture, prepared by stirring 21.5 mg of atrazine in 1000 ml of ultrapure water with a minimum amount of ethanol to ensure solubility (the atrazine concentration was about 21.5 ppm, or 10−4 M). To determine very low concentrations of atrazine solutions in the range between 2.15 and 2.15 × 106 pg ml−1 (from 10−11 to 10−4 M), we measured the cavity-resonant wavelength shift of the nano-porous silicon microcavity sensors under various conditions: atrazine in pure water and in an aqueous solution of humic acid (HA, 0.2 mg ml−1) extracted and purified from soil. Humic acid solutions were chosen to represent systems similar to natural conditions, where pesticide-containing water also contains dissolved organic matter [21]. When an atrazine solution is dropped onto the sensor surface, the solution partially substitutes the air in the pores of each layer of the sensor device, causing a change of its refractive index.
We observed a repeatable, completely reversible change in the cavity reflectivity spectrum. To test the performance of the optical sensor for the determination of the atrazine pesticide, we studied the wavelength shift in the reflectance spectra under various conditions: in air, in pure water and in humic acid (HA). The effective refractive index of the nano-PSMC layers immersed into the solutions is increased due to the substitution of air with liquid in the pores, and consequently the optical thickness of the layers is increased. When the microcavity sensor was exposed to water (refractive index 1.3326) and to humic acid (refractive index 1.3541), the reflectance spectra promptly shifted towards longer wavelengths by about 39.2 nm and 46.5 nm, respectively.

After analyzing the resonant wavelength shift in the reflectance spectra of the microcavity sensor under these conditions, we performed wavelength shift measurements for the determination of the atrazine pesticide in water and in HA during exposure to different concentrations (2.15-2.15 × 106 pg ml−1). It is notable that the sensor response depends mainly on two physical factors: the refractive index of the atrazine solution (i.e. the concentration of atrazine) and its capability of filling the PS pores. The concentration of atrazine in the solution is determined from the wavelength peak shift of the sensor, and the capability of filling the pores is tested by the repeatability of the measured values. As shown by the experimental results, the resonant peak shift in the reflectance spectra of the 1D-PSMC structures for atrazine concentrations in water from 2.15 to 2.15 × 106 pg ml−1 is 21.1 nm, but the sensor response is non-linear over this large range of atrazine concentrations. Figure 9 presents the response curve of the sensor to atrazine in water for concentrations from 2.15 to 2.15 × 106 pg ml−1. In our measurement, the wavelength shift increases linearly only at very low atrazine concentrations (from 2.15 to 21.5 pg ml−1).

Table 3 presents the measured resonant wavelength shifts of the sensor wetted by atrazine solutions with low pesticide concentrations. The resonant wavelength of the sensor shifted by 6.7 nm and 12.3 nm when the concentration of atrazine changed from 2.15 to 21.5 pg ml−1 in water and in humic acid, respectively. The linearity of the wavelength shift versus atrazine concentration in this very low range is an important factor for sensor applications. Figure 10 shows the linear relation between the resonant peak wavelength shift and the different concentrations of atrazine in the very low concentration range from 2.15 to 21.5 pg ml−1. In figure 10, each experimental point is the average of five independent measurements, with the error bars representing the standard deviation. We calculated the sensitivity of the sensor as the slope of the linear curve interpolating the experimental points. Thus, we obtained sensor sensitivities of 0.3 and 0.6 nm pg−1 ml for atrazine in water and in humic acid solution, respectively. From these measurement results, we also estimated the limit of detection (LOD) as the ratio between the instrument resolution and the sensitivity. The LOD values are 1.4 and 0.8 pg ml−1 for atrazine in water and in humic acid solution, respectively.
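A quick check of the LOD arithmetic (ours; we take the 0.4 nm resolution of the Ocean Optics S-2000 quoted in the experimental section as the relevant instrument resolution):

```python
resolution_nm = 0.4  # assumed: Ocean Optics S-2000 resolution quoted earlier
for medium, sensitivity in (("water", 0.3), ("humic acid", 0.6)):  # nm pg^-1 ml
    lod = resolution_nm / sensitivity  # pg ml^-1
    print(f"{medium}: LOD = {lod:.1f} pg/ml")
# water: 1.3 pg/ml; humic acid: 0.7 pg/ml, close to the quoted 1.4 and
# 0.8 pg/ml; the small differences presumably reflect rounding of the slopes.
```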
In addition, a higher wavelength shift was observed for atrazine in HA, because the HA solution contains dissolved organic matter, a component with a higher refractive index than water. It is remarkable that the sensor fabricated by our method shows a significant improvement for the determination of pesticide present in water in comparison with previous works (for example, [12] and [21]). This may be caused by the different current densities and etching times used for the preparation of the microcavity samples (i.e. differences in the porosity ratio of the low- and high-refractive-index layers and in the layer thickness) and by the difference in cavity resonant wavelengths (visible versus infrared). In our case, the experiment was repeated several times and the results showed good repeatability. On the other hand, the obtained results were checked by comparison with electrochemical immunoassays [22] using the same method for the preparation of low-concentration atrazine samples. In addition, it was observed that, after removing the atrazine solution from the sensor surface and washing it with distilled water, the cavity-resonant wavelength in the reflectance spectra promptly returns to its original position. This is a very good quality of these structures, as it is helpful in the development of reversible sensing devices.

Conclusion

In conclusion, we successfully built a high-sensitivity measurement system for the determination of solvent solutions and of pesticide concentration in water by using an optical sensor based on a nano-porous silicon microcavity. We established the basic characteristics of the optical sensors by simulation calculations and by experiments based on a series of organic solvents with known refractive indices. The elaborated sensor, with a sensitivity of 200 nm RIU⁻¹, can detect a minimum refractive index change of about 10⁻³. We used these sensors for the determination of ethanol and methanol concentrations from 5% to 15% in the commercial gasoline A92. For simultaneous detection of different organic solvents, the sensor response is controlled by the temperature of the solution and the velocity of the air stream flowing through the solution. We studied the dependence of the wavelength shift on solvent concentration, airflow velocity and solution temperature for ethanol and acetone solutions of various concentrations in order to enhance the sensitivity and specificity of the measurement. The dependence of the wavelength shift on concentration is linear, and the sensor sensitivity increases with the temperature of the solution and the velocity of the air stream. Solution temperature and airflow velocity determine the equilibrium of partial vapor condensation in the pores; the response therefore carries the characteristics of the specific solvent (vapor pressure and liquid refractive index), allowing discrimination between ethanol and acetone and determination of their concentrations. This suggests the possibility of simultaneous determination of the concentration and type of solvent. The nano-porous silicon microcavity sensor is capable of determining atrazine pesticide at concentrations in the range from 2.15 to 21.5 pg ml⁻¹, with an LOD of about 1.4 and 0.8 pg ml⁻¹ in water and humic acid environments, respectively, which makes it practically useful for measuring values below the maximum allowed concentrations in water for human consumption.
Analysis of characteristics and forecast of unintentional injury deaths of children under age 5 from 2013 to 2019 in Sichuan, China

Objective: Through the study of death characteristics and trend prediction, we hope to identify key populations, regions and seasons, thereby providing evidence to support the efficient prevention and control of unintentional injury deaths. Method: We collected information on 8630 unintentional deaths of children under age 5 from local surveillance systems, analyzed it by chi-square tests and made predictions with a seasonal ARIMA model. Results: About 33.1% of child deaths were under the age of 1, 60.5% were boys, 37.6% were in rural areas, 2.6% were among ethnic Tibetans, 6.8% were among ethnic Yi, and 46.6% died inside houses. The top three causes of total deaths were accidental drowning (35.0%), accidental suffocation (32.7%) and traffic accidents (15.5%). The ratio of males to females in traffic accident (1.28:1) and poisoning (1.30:1) deaths was relatively lower than in accidental falls (1.62:1) and drowning (1.85:1). The ratios of causes of death between rural and urban areas were: drowning (1.83:1), poisoning (1.75:1), suffocation (1.62:1), traffic (1.41:1), and falling (1.24:1). Deaths of children of the ethnic minority groups Tibetan and Yi increased year by year (χ² = 75.261, P < 0.001). Tibetan and Yi groups had the most deaths in summer, and Han in winter (χ² = 29.093, P < 0.001). Accidental suffocation accounted for 78.2 percent of the total unintentional deaths of children under age 1, while drowning accounted for only 2.4 percent. The model SARIMA(1,1,2)(2,0,0)[12] is suitable for describing and predicting unintentional injury deaths of children under age 5. Conclusion: We should combine death surveillance with qualitative investigation or in-depth quantitative investigation to further analyze unintentional injury deaths in children.

Introduction

More than 5 million children die every year around the world, and more than 80% of them are aged under 5 [1]. Unintentional injuries are the main cause of children's deaths, accounting for more than half of child deaths in some countries [2]. As the leading killer of children [3], unintentional injuries are more common in less developed regions; according to the WHO, more than 80% of unintentional injuries occur in low- or middle-income countries [4]. As one of the largest developing countries in the world, China records more than 200,000 child deaths each year as a result of unintentional injuries. And as one of China's most impoverished provinces, Sichuan has nearly the highest child mortality rate in the country. Reducing children's deaths is part of the Millennium Development Goals (MDG) [7]. The United Nations Sustainable Development Goals propose that preventable deaths of children under 5 years of age should be eliminated by 2030, and that every country should strive to reduce the mortality rate of children under 5 to less than 25‰ [8]. Reducing unintentional injury deaths in children is vital for reducing child mortality. This study intends to examine the temporal, regional and demographic distribution characteristics of these deaths and to use scientific methods to predict trends, which will make up for the lack of comprehensive analysis and visualization of the characteristics of unintentional injury deaths among children in Sichuan.
Objectives

Through analysis of the information in Sichuan, this study hopes to identify the epidemiological characteristics of unintentional injury deaths in local children under age 5, to identify the focus of injury prevention for people of different nationalities, genders, ages, etc., and to make scientific predictions of short-term death trends. It is hoped that key populations, key regions and key seasons can be identified to provide evidence-based support for efficient prevention and control of unintentional injury deaths.

Data sources

Data on child deaths in Sichuan Province from 2013 to 2019 used in this study came from the local maternal and child health surveillance system, which collects relevant data in accordance with Chinese regulations. We obtained 8630 records of children who died of unintentional injuries, including city of death (21 in total), household registration (urban or rural), gender (male or female), ethnic gathering area (ethnic-minority areas of Tibetan and Yi, and non-minority areas, as the Han group), cause of death (drowning, suffocation, traffic accident, poisoning, falls, others), age (≥0 and <1, ≥1 and <2, ≥2 and <3, ≥3 and <4, ≥4 and <5), year of death (2013–2019), season (March–May as spring, June–August as summer, September–November as autumn, December–February as winter) and place of death (inside the house, in the hospital, and others).

Data analysis

The "hchinamap" package in RStudio v1.0.143 was used to map the number of accidental child deaths in Sichuan Province. Excel was used to draw circle diagrams reflecting the composition of accidental death causes, with the composition ratio of the top four causes marked. The chi-square test was used to compare the epidemiological characteristics of unintentional deaths by gender, region and ethnicity. The seasonal ARIMA model was used to predict the future trend of total injury deaths per month. P < 0.05 was considered statistically significant.

Causes of death by region

The unintentional injury deaths of children under the age of five in Sichuan are concentrated in the more densely populated eastern regions. Chengdu, the capital city of Sichuan, had the most children's deaths (915). Western areas with higher elevations and lower population densities had the fewest deaths, such as Ganzi (108) and Aba (126). The leading causes of unintentional injury deaths in virtually every city are suffocation and drowning. Areas from northern Mianyang city (36.3%) to southern Zigong city (51.7%) and from eastern Guangyuan city (31.9%) to western Meishan city (40.1%) mainly had drowning deaths; the numerous rivers in these areas create additional conditions for drowning. Ganzi (40.7%), Aba (40.5%) and Liangshan (36.1%) in the western areas are gathering places of the Tibetan and Yi ethnic minority groups, where children are mainly killed by accidental suffocation. Details can be seen in Fig. 1.

Trends of different causes of death

Total unintentional injury deaths among children under the age of five in Sichuan decreased year on year from 2013 to 2019. Total deaths peaked in cold weather (around January). Suffocation deaths occur significantly more commonly in winter, and drowning deaths occur more commonly in summer. Suffocation and drowning deaths are both on a year-on-year downward trend. However, other causes of death do not fluctuate seasonally, and their long-term trends are relatively stable. Details can be seen in Fig. 2 and Table 1.
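The group comparisons above and below (by year, season, gender, region and ethnicity) rest on chi-square tests of contingency tables. As a minimal sketch, with invented counts standing in for the surveillance data, such a test looks like this in Python:

```python
from scipy.stats import chi2_contingency

# Invented 2x2 table: rows = [boys, girls], columns = [drowning, suffocation].
table = [[1850, 1700],
         [1000, 1120]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, P = {p:.4f}")
# Following the study's criterion, P < 0.05 counts as statistically significant.
```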
Characteristics of unintentional deaths in children

Of the 8630 unintentional injury deaths of children under the age of five in Sichuan from 2013 to 2019, 33.1% were under the age of one, 60.5% were boys, 37.6% were in rural areas, 2.6% were ethnic Tibetan, and 6.8% were ethnic Yi. The main causes of total deaths were accidental drowning (35.0%), accidental suffocation (32.7%) and traffic accidents (15.5%). A total of 46.6% of children died in the house and 26.0% in hospital.

Comparison between different genders

About three-fifths of the children who died from unintentional injuries in each year or season were boys, and the ratio of boys to girls did not change by year or season. The proportion of male children's deaths increased with age (χ² = 30.078, P < 0.001) among children who died unintentionally under 5 years old. There are also significant gender differences in the composition of causes of death.

Comparison between rural and urban areas

In 2013, the number of urban deaths was approximately twice as high as in rural areas, and in 2019 the numbers of deaths in urban and rural areas were nearly the same. Urban areas have always had more deaths than rural areas, but the gap between them narrowed over the years (χ² = 122.961, P < 0.001). Urban deaths are 1. There are more deaths at home in rural areas and fewer in hospitals than in urban areas (χ² = 43.550, P < 0.001).

Comparison among different ethnic groups

The number of child deaths among ethnic Tibetan and Yi people increased year on year, whereas deaths in the non-minority Han group decreased (χ² = 75.261, P < 0.001). Tibetan and Yi groups had the most deaths in summer, and Han in winter (χ² = 29.093, P < 0.001). Children under the age of one accounted for more than 40 percent of deaths of children under the age of five in the Tibetan and Yi ethnic-minority groups, and only about 30 percent in the Han non-minority group (χ² = 49.529, P < 0.001). There were statistically significant differences in the composition of causes of death among different ethnic groups (χ² = 164.098, P < 0.001). The top three causes of death for Tibetans and Yi were suffocation, drowning and traffic accidents, while for Han they were drowning, suffocation and traffic accidents. Forty percent of the children died from accidental suffocation and about 17 percent from drowning in the ethnic Tibetan group, compared with 32 percent and 36 percent in the non-minority Han group. Tibetan and Yi people died more at home and less in hospitals than Han people (χ² = 33.445, P < 0.001). Details can be seen in Table 2.

Comparison among different age groups

Accidental suffocation accounted for 78.2 percent of the total unintentional deaths of children under age 1, while drowning accounted for only 2.4 percent. Among unintentional deaths in children older than 1, drowning accounted for half of the causes of death and accidental suffocation for about 10 percent. Details can be seen in Fig. 3.

Prediction of unintentional deaths in children

We used a time series of the total number of unintentional injury deaths per month. The series is not white noise, and it shows a downward trend and seasonal fluctuations, as can be seen in Fig. 2. Therefore, we used the seasonal ARIMA model to build the prediction model. With the ACF and PACF plots (Fig. 4, Fig. 5) of the original series and the differenced series as references, the model SARIMA(1,1,2)(2,0,0)[12] was finally formed (details can be seen in Fig. 6).
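As a hedged sketch rather than the authors' actual code, a SARIMA(1,1,2)(2,0,0)[12] model of the kind identified above can be fitted in Python with statsmodels; `monthly_deaths` is a stand-in for the 84 monthly counts from 2013-2019, which are not reproduced here.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_forecast(monthly_deaths: pd.Series, horizon: int = 12):
    """Fit the seasonal ARIMA identified from the ACF/PACF plots and forecast."""
    model = SARIMAX(
        monthly_deaths,
        order=(1, 1, 2),               # non-seasonal (p, d, q): one AR term,
                                       # first differencing, two MA terms
        seasonal_order=(2, 0, 0, 12),  # seasonal (P, D, Q) with a 12-month period
    )
    result = model.fit(disp=False)     # maximum-likelihood estimation
    return result.forecast(steps=horizon)  # predicted monthly deaths
```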
Discussion

Compared with a study of China 10 years ago [9], which showed that drowning and traffic accidents were the first two causes of total unintentional injury deaths, this study shows that the leading causes are drowning (35%) and suffocation (32.7%). The result differs from that in Turkey [10], where the leading causes were traffic injuries (36.5%) and falls (12.0%), and in Pakistan [11], where the leading causes were drowning (22%) and traffic injuries (12%). The findings of this study that male children died more often and that the death proportion in rural areas is higher are similar to those of many other Chinese studies [12,13], as well as to those of Japan [14] and Iran [15] in Asia. This study shows that the leading cause of unintentional death under age 1 was suffocation and that the proportion of injury deaths among children under 1 year old was larger than in other age groups, which is similar to findings from Brazil [16] and the United States [17]. In this study, rural children accounted for the majority of deaths from each cause. A study in India [18] shows similar results; however, an Egyptian study [19] shows different results. More children die from unintentional injuries in rural areas than in urban areas. Given the relatively higher rates of intra-household deaths in rural and ethnic minority areas, it can be speculated that there may still be significant gaps in access to health services between urban and rural areas, and between different ethnic groups in Sichuan. Differences in unintentional death exist among children of different ethnicities in America, where non-Hispanic black children died more compared with non-Hispanic white and Hispanic children [20,21]. Results showing that children in ethnic minority groups are more inclined to die of unintentional injuries appear in the studies of Bernard SJ [22] and Gilchrist J [23], where American Indian/Alaska Natives (AI/ANs) and blacks had consistently more total injury deaths than whites. In this study, children of the ethnic groups Yi and Tibetan died more compared with the largest ethnic group, Han, in China. In recent years, the total number of unintentional injury child deaths in Sichuan has dropped significantly. Drowning and suffocation deaths declined the most, which may be related to the previously higher numbers of drowning and suffocation deaths, suggesting that prevention and control of the leading causes of death have achieved great results. However, their rate of decline is becoming slower, and seasonal fluctuations remain. Suffocation was the leading cause of death for both rural areas and ethnic minorities in this study, while drowning was the leading cause of death for urban areas and ethnic Han. In accordance with the finding that suffocation deaths mainly occur within the first year of life and in the cold season, while drowning deaths occur at 1 year old and above and in summer, it is necessary to pay special attention to the prevention of suffocation deaths in infants in winter and of drowning deaths in older children in summer, and to strengthen targeted prevention accordingly. In addition, child deaths from poisoning, falls and traffic accidents have not changed much over the years; with the rapid decline of drowning and suffocation deaths, the proportion of deaths from these other causes must increase, and they should be given equal attention too.

Conclusion

This study clarifies the timing, location and demographic characteristics of children who died from unintentional injuries in Sichuan, making up for the lack of such a complete study in Sichuan.
However, despite the large amount of data used in this study, we did not explore the causes in sufficient depth; for example, we lacked analysis of emergency treatment measures and of the construction of accidental injury prevention facilities. In the future, we should combine death surveillance with qualitative surveys or in-depth quantitative surveys to further provide evidence for reducing unintentional injury deaths in children.
Information Literacy for Social Workers: University at Albany Libraries prepare MSW Students for Research and Practice

In a series of workshops, University at Albany librarians collaborate with the School of Social Welfare to impart information literacy skills to Master of Social Work students. The rationale, curriculum, and embedded ACRL information literacy standards are discussed. Also presented are assessments and a discussion of the challenges of implementation.

INTRODUCTION

In the summer of 2002, library faculty at Dewey Library (a branch of the University Libraries, University at Albany) and at the university's School of Social Welfare discussed ways to improve information and computer literacy among students in the Master of Social Work (MSW) program, a 60-credit program that is normally completed in two years. The result of these discussions was an agreement that the library would teach information literacy skills to students in a series of workshops. The school's requirements for graduation were changed to state that students must complete a basic workshop on social welfare information literacy by the end of their first 15 credits in the MSW program. By the end of 31 credits they must complete two additional workshops. In addition to the library workshops, students are required to sign up for the MSW Listserv. This article examines (a) the origins of this program, (b) the rationale for the requirement, (c) guidelines for creating the requirement, (d) how the program meets Association of College and Research Libraries (ACRL) information literacy standards, (e) the structure and content of the workshops, (f) assessment of student outcomes, and (g) feedback from students and faculty. As stated by the school on its information sheet, "Computer/Information Literacy Requirement" (August 2005): Social Work is a knowledge-intensive profession where information is essential in decision-making and practice. Information must be evidence-based, relevant, current, clear, accurate, conveniently accessed, and easily communicated.

Innovations

The Computer/Information Literacy Requirement emerged from the Task Force on Technical Competence, appointed by the School of Social Welfare Curriculum Committee. The Curriculum Committee was concerned about the quality of student research, as well as students' lack of familiarity with word processing, spreadsheet, and presentation software. The requirement was designed to be implemented in a three-step process: signing up for and learning to use the school's Listserv; attending the library's information literacy classes; and computer-use competency in word processing, spreadsheets, etc. The last phase has not been implemented.

LITERATURE REVIEW

A search of Library and Information Science and Technology Abstracts (LISTA), Library and Information Science Abstracts (LISA), ERIC, Social Work Abstracts, and Social Services Abstracts found no reports of similar collaborations between libraries and graduate academic programs in the literature. However, collaborations between librarians and teaching faculty to promote information literacy in graduate students are described in several recent articles. Martha Cooney and Lorene Hiris (2003) describe a collaborative relationship between a librarian and a teaching faculty member in a Spring 2002 graduate-level business class.
A unique grading system, in which an information literacy competency grade was used to help evaluate each student's research paper, resulted in enhanced information literacy skills. In a 2005 article, James D. Hooks reported that graduate students' research abilities improved substantially with the involvement of librarians in an educational cohort (a group of students who move through an educational program together) made up of Master of Education students. In this example, the librarian collaborated with instructors in creating course content and assignments for an off-campus cohort. The librarian was also present in every class, contributing to class discussions and lecturing when appropriate. Additionally, the librarian was available for one-on-one consultations with the students. Faculty in the department of Education and Psychology (EPC) at California State University, Northridge created a set of information competencies for student learning outcomes. Librarians collaborated with faculty by designing three information literacy sessions to teach these competencies to graduate students in EPC. These sessions were all taught by librarians as part of EPC 602, a graduate-level class in research principles. Lynn Lampert (2005) asserts that this model of teaching information literacy skills works because students are "immersed, through assignments and interaction with librarians and discipline faculty, in the totality of all the information competencies that make their field unique and rewarding." Michelle Toth (2005) describes an ongoing faculty-librarian collaboration in designing and teaching a graduate-level research and writing course at SUNY Plattsburgh to help students prepare a required master's thesis. Teaching faculty teach writing and topic formation, research proposal composition, and drafting of human subject compliance applications. The librarian is responsible for teaching research methods and library literacy. Course assessments indicate that students feel that the course has helped them to "make significant progress" on their theses. Another research methods course in which faculty and librarians collaborated is described by Navaz Bhavnagri and Veronica Bielat (2005). This course was designed for elementary and early childhood education master's degree students at Wayne State University. Blackboard courseware was employed to promote self-instruction. Librarians contributed their technological skills to provide content (identified by teaching faculty) on the Blackboard courseware. One model of collaboration between a library and a department is discussed by F. Grace Xu (2006). In 2004, the departmental library in the School of Social Welfare at the University of Southern California, Los Angeles was transformed into a digital library in which information literacy was provided primarily through online tutorials. Social welfare students in an undergraduate program at Catholic University of America are the subject of an article by Elizabeth Pilonis, Mary Agnes Thompson and Catherine Eisenhower (2005). These students were required to write a capstone paper before graduating but had difficulty doing a substantive literature search. A librarian collaborated with teaching faculty to impart searching and critical thinking skills.

THE CLASSES

The basic class for the information literacy requirement is a 90-minute workshop, the Social Welfare Research Seminar.
The class includes a basic orientation to the University at Albany Libraries (locations, services, using the University Libraries' Web page), instruction in conducting research generally (using encyclopedias, dictionaries, thesauri, and the library catalog; finding print and electronic journals), and instruction in conducting research in the field of social welfare (discussion and demonstration of databases appropriate for social welfare and use of the Internet for research). The University at Albany Libraries Subject Page for Social Welfare (Brustman, 2007) is introduced. In addition, the instructor covers characteristics of research articles, resources for using APA style, and criteria for evaluating Internet sites. Each workshop includes a time for hands-on practice searching Social Work Abstracts, Social Services Abstracts, or PsycINFO. Students are free to use a topic suggested by the instructor or a project in which they have a research interest. In response to student complaints about time pressure and overlap between workshops, the Social Welfare faculty changed the requirement in Fall 2004 so that students now take the basic workshop and one additional one-hour workshop instead of two. Most Social Welfare Research Seminar classes are taught by the social welfare bibliographer. Richard Irving, the public affairs and policy bibliographer, has also taught some sessions using the outline developed for the workshop. Mr. Irving has had extensive experience providing reference service to social welfare students and faculty and is also the primary instructor for public policy and legal workshops offered by the library. The complete class schedule is available on the library Web site (University at Albany, Dewey Library, 2007). The one-hour workshops were created by the library to enhance skills learned in the basic seminar or to extend skills into other areas of expertise. Some of these classes are tailored specifically for students who will be in the two major concentrations for study in the Master of Social Work program, Direct Practice and MACRO. On the School of Social Welfare Web site, the Direct Practice concentration is described as follows: In the Direct Practice concentration, students acquire advanced and specialized knowledge of human behavior, social systems, and intervention processes that will aid them in assisting clients at the individual, group, family or community levels. Students may focus their study in such fields as child and family services, mental health, health care, or aging or may take courses in diverse fields (University at Albany, School of Social Welfare, undated). The MACRO concentration is designed to prepare managers, leaders, and expert practitioners who are able to meet and anticipate changing demands. Graduates will assume positions such as program planners, clinical manager/program director, researcher/program evaluator, staff development and training, resource developer (fundraising, grant writing, and marketing), and community organizer/community developer (University at Albany, School of Social Welfare, undated). For direct practice students the librarians offer workshops such as Library Research for Evidence-Based Practice and Using the Internet for Research. MACRO students are offered Introduction to Federal Public Policy Research, Non-Profit Organizations--Information Sources, and classes on legal research.
General classes for both concentrations include Using the Internet for Research, Using the Library & Internet Research from Home, MINERVA Online Catalog (the OPAC), and Introduction to Research Databases. The library provides an "advice sheet" to recommend classes that will be beneficial for students in one concentration or the other. If students follow the advice sheet recommendations, they have fewer repetitive classes. The Library Resources for Evidence-Based Practice class is taught by the social welfare bibliographer, and public policy and legal classes are taught by the public administration and policy bibliographer. All Dewey Library faculty participate in teaching the rest of the in-depth classes. These in-depth classes, targeted to concentrations in the school, are offered three to four times per semester in the fall and spring. Since many students are time-stressed due to field placements, employment, family obligations, and other personal commitments, some classes were scheduled late in the afternoon or early evening and, initially, on Saturdays. At the end of each academic year, when several students had yet to meet the requirement, two of the courses were offered as self-study worksheets. The worksheets are self-paced exercises, designed to take approximately one hour. After two semesters, some ground rules were established. For instance, credit would not be granted to those arriving more than 15 minutes late for class, and students were required to keep track of their own sign-off sheets for proof of completion of classes. The social welfare bibliographer kept a list of attendees for the Social Welfare Research Seminar. Eventually attendance will be recorded electronically. Formal assessments of the Social Welfare Research Seminar were conducted in Fall 2003 and May 2006. Survey instruments were created to measure students' comprehension of the material presented and their rating of the value of the program. Each seminar class also has a period for "practice" in which students do a hands-on search on a suggested social work topic (or a topic of interest to them). This allows students to see for themselves whether they have mastered the basics of the material on database searching. The instructor checks in with each student during this time.

GOALS

The library's primary goal for the Social Welfare Research Seminar is to help students effectively and efficiently use library and Internet resources to successfully complete required coursework in the social welfare curriculum. A secondary goal is to expose students to concepts and resources, including use of quality Internet resources. This knowledge will be useful to them not only as students but in their professional careers, when they may no longer have university library privileges. It is hoped that students will become aware of what services the library can provide and become acquainted with librarians and library services. As part of the process of developing and evaluating the Research Seminar, the seminar's designers consulted the "Association of College and Research Libraries Information Literacy Competency Standards for Higher Education" (Association of College & Research Libraries, 2006, August 23). Because the goals for this seminar are closely tied to the discipline-based research needs of Social Welfare graduate students, not all of the standards incorporated into the seminar were given equal weight.
For example, the bulk of the teaching concerns concepts related to Standard Two: "The information literate student accesses needed information effectively and efficiently" (Association of College & Research Libraries, 2006, August 23). Nine of the 20 multiple-choice questions on the Fall 2003 assessment survey address students' knowledge of the scope of information resources and students' ability to search the resources using Boolean operators, field limits, and controlled vocabulary. Two additional questions assess students' ability to locate a resource after they select it from a database or the library catalog. Standard Three states, "The information literate student evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system" (Association of College & Research Libraries, 2006, August 23). This is also an integral component of the Social Welfare Research Seminar, and three questions on the Fall 2003 survey specifically address students' ability to critically evaluate resources. These questions elicited correct answers from more than 90% of the respondents. Students are introduced to information sources that provide background information and terminology specific to the discipline. Student awareness of essential information resources such as subject encyclopedias and the Social Work Dictionary enables them to meet Standard One: "The information literate student determines the nature and extent of the information needed" (Association of College & Research Libraries, 2006, August 23). Standard Four is "The information literate student, individually or as a member of a group, uses information effectively to accomplish a specific purpose" (Association of College & Research Libraries, 2006, August 23). This standard is accomplished during the hands-on practice portion of the seminar. Students are given a social welfare topic to investigate using some of the databases that have just been demonstrated. Students are given the option to substitute a research topic that they are investigating for one of their classes.

ANALYSIS OF DATA

Two decisions that had to be made were whether to make the test anonymous and how much time would be allowed for students to complete the task. The committee recognized that some students would not put as much effort into an anonymous test and that some students were under great time pressure. At each class the instructor stopped at least 10 minutes early to leave time to complete the assessment. Approximately 80% of the students who completed the survey were able to correctly answer 75% or more of the questions. Questions 4, 8, and 10, which concerned database scope and Boolean operators, were answered correctly by more than 90% of the students. This indicates that the goal of enabling students to use library resources effectively was met. More than 90% of the students also correctly answered questions about understanding the variety and quality of Internet resources. Students did not do as well with questions that concerned choices between appropriate resources (questions 5, 13, and 14), scoring in the 67-69% range on these questions. In other words, students are not sufficiently adept at research to be able to match specific resources to their individual research needs. The question with which students had the most trouble asked about ways in which students could find out what sources other libraries might own. In retrospect, this might not be an important concept for students to learn.
It is probably more useful for students to know that if there is a source that they can't find in the library, they can use the interlibrary loan service. Students provided extensive comments on the assessment. Comments were solicited in three categories:

#1. Name one or two things that you learned in this Social Welfare Research Seminar that you did not already know. Comments were plentiful and almost uniformly positive. There were also many comments about the libraries' Web page resources specifically for social welfare; learning about new credible Web sites from governments, organizations, and statistical sites; and the students' new awareness of many library services.

#2. Is there anything that was not covered in the Social Welfare Research Seminar that you wish was covered? Sixty-seven of the respondents left this part blank, another 25 said "no," and 12 implied "no." A number of students added comments about the class, saying that it was very informative, presented in a clear manner, and useful. They noted that they would like more information on how to conduct specific searches (which is offered in another class), how to use LEXIS-NEXIS (mentioned but not demonstrated), more information on finding statistics, more instruction on full-text sources (covered in another workshop), more on how to look for Internet sources, and a "tour" of library resources in print.

#3. Additional Comments. Comments in "additional comments" expressed the view that the class was very helpful, informative, enjoyable, and useful. Students further noted that this should have been part of their undergraduate experience and that they were unaware of all the resources available. Two examples in response to the question on what they had learned that they didn't previously know were: "I feel I will be able to access information with ease" and "Everything but I forgot most of it already. I will be contacting you." In response to the inquiry as to whether anything was not covered that they wish had been, one student commented, "I'll know when I try to do it and get confused, but I walk away confident."

Although the authors believe that the current assessment indicates that the library workshops are accomplishing much of their goal, more emphasis on the differences and individual strengths of the social welfare resources and clearer descriptions of the interlibrary loan process should be implemented in future seminars. Recently instructors have placed more emphasis on clarifying these concepts. Additional assessments of student learning outcomes will be administered periodically. In April 2005 the librarians met with the School of Social Welfare Curriculum Committee to discuss the progress of this program. All of the committee members were enthusiastic and supportive of the program. One faculty member noted that there was a clear difference between students who had and had not taken the Social Welfare Research Seminar. Another mentioned that one difference was in their understanding of what constituted a scholarly journal. School faculty suggested new classes in using EndNote or similar software, formatting papers and research tables, and expanding the components on APA style and plagiarism. They approved some suggestions for new class offerings. In May 2006, an additional assessment was conducted. Having taught the program for nearly four years, the instructors were interested in whether student perceptions of the requirement correlated with faculty perceptions and with documented student learning outcomes.
Six School of Social Welfare classes composed of second-year students were identified. From those classes, students who would be graduating in May or August of 2006 were asked to fill out the brief survey. Forty-six completed surveys were returned. Results are illustrated in Table 3. The requirement states that students are to complete the initial class, the Social Welfare Research Seminar, during their first 15 credits at the school. Eighty percent compliance by 30 credits seems to be in accord with attendance records collected by the library. Every year a handful of students do not complete the requirement until a few days before graduation. Responses indicated that between 78% and 89% of the graduating students felt that the information literacy classes had a good to excellent effect on their ability to use information resources effectively. The largest percentage, 89%, responded positively to the survey question about students' ability to use databases effectively, while the lowest positive response, 78%, concerned their increased ability to evaluate Internet resources. Possibly this reflects students' increased confidence and experience with overall Internet resource use prior to taking the classes. Twenty surveys were returned with comments. Many of these comments were concerned with whether there should be such a requirement for graduate students. Eight respondents felt that such a requirement was appropriate for undergraduates or that graduate students would have already learned this material. One student who commented that the workshop should be optional accounted for two-thirds of the "poor" ratings received. It is noteworthy that students filling out the assessment survey in Fall 2003 were very enthusiastic about the usefulness of the class. That assessment was taken immediately after attending the Social Welfare Research Seminar. However, two and a half years later, the May 2006 survey of graduating MSW students indicated that a greater number questioned the need for a requirement. Students' positive comments included that the seminar was a good basic overview, that it was helpful in finding journals, that additional follow-up sessions would be handy, and that it should be taken in the first semester. Two students took the opportunity to note difficulties with other library services. One suggested that the libraries focus the seminars on particular topics such as child welfare or aging. A detailed report of this second assessment was also sent to the School of Social Welfare Curriculum Committee.

CHALLENGES

Some of the biggest challenges for this program are presented by the administration of the requirement. The libraries have had problems getting students to take the classes early enough in the MSW program that they will be able to use what they learn in their coursework. Other questions that the libraries grapple with include: Is there a more effective way of communicating to students about the requirement, beyond the school's orientation and signage and the library's Web page and signage? How can a last-minute rush be avoided when several students who are about to graduate have not completed the requirement? How can the school keep better track of students who are not fulfilling the requirement?
A number of solutions were discussed with the school Curriculum Committee, including making sure more students sign up for workshops at orientation, presenting information on the requirement more often to students, enlisting faculty to announce and encourage taking the classes early, and beginning some classes in August before students begin their first semester. Both the library and the school have discussed strategies for using a database to track student completion of the requirement. A major issue for Dewey Library librarians in offering these classes is the workload. The library has a small classroom and students with widely varying scheduling needs. Librarians end up teaching classes of anywhere from one to 16 students. Another challenge is that teaching the Social Welfare Research Seminar can become very repetitive. This problem has been alleviated by enlisting the public affairs and policy bibliographer to teach some classes. A WebCT version of the Social Welfare Research Seminar is now under development and will be tested during the Fall 2007 semester. This will allow students to take the seminar at any time and in any place. In the future, WebCT or other technology will be used to offer some of the other courses as well. This program has apparently had an impact on Dewey Library reference services and individual research appointments. While Dewey Library also provides services to three other professional schools and departments (Criminal Justice, Public Administration & Policy, and Information Studies), social work students are by far the heaviest users of reference services. During workshops, social welfare students are encouraged to make individual appointments with the social welfare librarian to discuss their research. Many take advantage of this offer. Generally, the program seems to have increased students' comfort level with librarians and library services. Data from the assessment instrument and from faculty comments indicate that this program helps students understand the resources available for accessing the social welfare literature, use those resources more efficiently, and understand the library services available to them.
Acid Sphingomyelinase Gene Knockout Ameliorates Hyperhomocysteinemic Glomerular Injury in Mice Lacking Cystathionine-β-Synthase

Acid sphingomyelinase (ASM) has been implicated in the development of hyperhomocysteinemia (hHcys)-induced glomerular oxidative stress and injury. However, it remains unknown whether genetic manipulation of the ASM gene produces beneficial or detrimental effects on hHcys-induced glomerular injury. The present study generated and characterized mice lacking the cystathionine β-synthase (Cbs) and Asm genes by cross breeding Cbs+/− and Asm+/− mice. Given that homozygous Cbs−/−/Asm−/− mice could not survive for 3 weeks, Cbs+/−/Asm+/+, Cbs+/−/Asm+/− and Cbs+/−/Asm−/− mice as well as their Cbs wild-type littermates were used to study the role of Asm−/− on a background of Cbs+/− with hHcys. HPLC analysis revealed that the plasma Hcys level was significantly elevated in Cbs heterozygous (Cbs+/−) mice, irrespective of Asm gene copy number, compared to Cbs+/+ mice. Cbs+/−/Asm+/+ mice had significantly increased renal Asm activity, ceramide production and O2·− levels compared to Cbs+/+/Asm+/+ mice, while Cbs+/−/Asm−/− mice showed significantly reduced renal Asm activity, ceramide production and O2·− levels despite increased plasma Hcys levels. Confocal microscopy demonstrated that the colocalization of podocin with ceramide was much lower in Cbs+/−/Asm−/− mice compared to Cbs+/−/Asm+/+ mice, which was accompanied by a reduced glomerular damage index, albuminuria and proteinuria in Cbs+/−/Asm−/− mice. Immunofluorescent analyses of podocin, nephrin and desmin expression also showed less podocyte damage in the glomeruli from Cbs+/−/Asm−/− mice compared to Cbs+/−/Asm+/+ mice. In in vitro studies of podocytes, the Hcys-enhanced O2·− production, desmin expression and ceramide production, as well as the decreases in VEGF level and podocin expression, were substantially attenuated by prior treatment with amitriptyline, an Asm inhibitor. In conclusion, Asm gene knockout or corresponding enzyme inhibition protects the podocytes and glomeruli from hHcys-induced oxidative stress and injury.

Introduction

Acid sphingomyelinase (ASM), a ceramide-producing enzyme, has been reported to be involved in the regulation of cell and organ functions and has been implicated in the development of different diseases such as obesity, diabetes, atherosclerosis, kidney diseases and disorders of lipid metabolism [1][2][3]. ASM hydrolyzes sphingomyelin to ceramide and phosphorylcholine and thereby exerts its signaling or regulatory role. It has been reported that ASM deficiency leads to Niemann-Pick disease in humans and that Asm gene (Asm is commonly used to represent the mouse gene for ASM) knockout in mice results in resistance to radiation- [4] and other forms of stress-induced apoptosis [1]. Similarly, inhibition of ASM activity has also been shown to render cells and animals resistant to the apoptotic effects of diverse stimuli including Fas/CD95 [5], ischemia [6], radiation [7], chemotherapy [8] and tumor necrosis factor-alpha (TNF-α) [9]. In addition, Asm knockout or Asm inhibition has been shown to have protective actions in lung inflammation and fibrosis [10], cystic fibrosis [11][12], obesity and associated glomerular injury [13], liver fibrogenesis [14] and renal fibrosis [15].
In recent studies, we and others have demonstrated that ASM can be activated during hHcys, whereby ceramide is produced, resulting in activation of NADPH oxidase, local oxidative stress and consequent glomerulosclerosis and loss of kidney function [16][17][18][19]. However, most of these studies were done using pharmacological or molecular interventions; to our knowledge, no genetic approaches have been used to address the role of the ASM-ceramide regulatory mechanism in the development of hHcys-associated glomerular injury or end-stage renal disease. Recently, the characterization of Cbs gene knockout mice as a model of hHcys and the development of Asm gene deletion in mice [20][21] provide an opportunity to address whether genetic manipulation of both genes can alter hHcys-induced pathological changes, in particular in the renal glomeruli, which is a major focus of our laboratory. In the present study, we hypothesized that genetic deletion of the Asm gene protects glomeruli from hHcys-induced oxidative stress and thereby ameliorates podocyte injury and glomerulosclerosis during hHcys. To test this hypothesis, we for the first time generated mice lacking the Asm and Cbs genes (lacking one allele of Cbs and two alleles of Asm) to determine whether Asm deletion has any effect on the glomerular oxidative stress and podocyte injury produced by hHcys in Cbs gene-deficient mice. By analyzing Asm homozygous and heterozygous mice on a background of partial Cbs deletion, we sought gene titration data clarifying the pathogenic role of Asm in hHcys. Using cultured murine podocytes, we further examined the direct effects of ASM inhibition on Hcys-induced cellular oxidative stress and related injury. These in vivo and in vitro experiments elucidate the role of ASM in the development of podocyte injury and glomerular sclerosis associated with hHcys, which may identify an important target for possible gene therapy during the course of hHcys-induced pathology.

Genotyping and Plasma Hcys Concentrations in DKO Mice

The genotypes of the mutant mice were confirmed by PCR using primers specific for the Cbs and Asm mouse genes. As shown in Figure 1B, when the Cbs gene primers were used for genotyping, 321 bp and 1500 bp products could be detected. Mice with only the 321 bp band are wild type (Cbs+/+), while mice with both bands are heterozygotes (Cbs+/−). In Asm genotyping, mice showing a single product of 269 or 523 bp are Asm wild type (Asm+/+) or knockout homozygotes (Asm−/−), respectively; if both products were detected in the same mouse, that mouse was heterozygous for the Asm gene (Asm+/−). HPLC analyses showed that the plasma Hcys concentration was similar among Cbs+/+/Asm+/+, Cbs+/+/Asm+/− and Cbs+/+/Asm−/− mice, which carry different Asm genotypes on the same Cbs wild-type background. Compared to these Cbs wild-type mice, the plasma Hcys concentrations were significantly increased in Cbs heterozygotes with different Asm gene copies, namely Cbs+/−/Asm+/+, Cbs+/−/Asm+/− and Cbs+/−/Asm−/− mice, but there was no significant difference in plasma Hcys levels within this group of Cbs+/− mice with different copies of the Asm gene (Figure 1C). These data suggest that the Asm gene is not involved in the regulation of plasma Hcys levels and therefore does not alter the occurrence of hHcys in mice.
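For orientation only, the expected Mendelian genotype frequencies of such a cross can be enumerated programmatically. The sketch below assumes an intercross of double heterozygotes (Cbs+/−/Asm+/− × Cbs+/−/Asm+/−) with independently assorting loci; it describes textbook ratios, not the actual colony outcomes, which are skewed by the early lethality of Cbs−/−/Asm−/− pups.

```python
from itertools import product
from collections import Counter

def offspring_genotypes(parent1, parent2):
    """Expected genotype frequencies at one locus from a single cross.

    Each parent is a tuple of two alleles, e.g. ('+', '-') for a heterozygote.
    """
    counts = Counter(tuple(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# Intercross of double heterozygotes: Cbs+/- x Cbs+/- and Asm+/- x Asm+/-.
cbs = offspring_genotypes(('+', '-'), ('+', '-'))
asm = offspring_genotypes(('+', '-'), ('+', '-'))

# The two loci assort independently, so joint frequencies are products:
for cg, cp in cbs.items():
    for ag, ap in asm.items():
        print(f"Cbs{cg[0]}/{cg[1]}  Asm{ag[0]}/{ag[1]}: {cp * ap:.4f}")
# Cbs-/-/Asm-/- is expected in 1/16 of pups, but those pups
# do not survive to 3 weeks, as noted in the text.
```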
Blockade of Hcys-induced Ceramide Expression and Podocyte Injury by Asm Inhibition in Cultured Podocytes

The studies above demonstrated that mice lacking the Cbs and Asm genes are protected from glomerular oxidative stress, glomerular injury and podocyte injury. We further performed in vitro experiments to confirm whether this glomerular injury truly occurs in podocytes. Using cultured murine podocytes, we examined ceramide production and the expression of podocyte markers. As shown in Figure 6, immunofluorescent analysis demonstrated that Hcys stimulation increased desmin and ceramide expression in podocytes compared to untreated cells. Prior treatment with the Asm inhibitor amitriptyline decreased the Hcys-induced elevation of desmin and ceramide production in podocytes (Figure 6A). In contrast, another podocyte marker, podocin, was markedly reduced upon Hcys stimulation, and Asm inhibition almost completely attenuated this decrease in podocin expression (Figure 6A). The summarized data are shown in Figure 6B.

Discussion

The major goal of the present study was to determine whether genetic deletion of acid sphingomyelinase (Asm) produces beneficial or detrimental effects in the development of hHcys-induced glomerular injury and sclerosis. We found that the genetic model of hyperhomocysteinemic Cbs mice (Cbs+/−/Asm+/+) showed enhanced ceramide production and Asm activity, which contributed to NADPH oxidase-dependent O2·− production and local oxidative stress in glomeruli and ultimately led to podocyte injury and glomerulosclerosis. These results demonstrate for the first time that mice lacking the Cbs and Asm genes (Cbs+/−/Asm−/−) are protected against hHcys-induced glomerular oxidative stress and injury. We first generated and characterized mice lacking the cystathionine β-synthase (Cbs) and acid sphingomyelinase (Asm) genes by cross breeding Cbs+/− and Asm+/− mice. Given that homozygous Cbs−/−/Asm−/− mice could not survive for 3 weeks, Cbs+/−/Asm+/+, Cbs+/−/Asm+/− and Cbs+/−/Asm−/− mice as well as their Cbs wild-type littermates were used to study the role of Asm−/− on a Cbs+/− background that produced hHcys. Previous studies have shown that blood Hcys levels are a complex trait affected by several genetic and environmental factors. It is known that genetic factors contribute to mild, moderate [22] and severe hHcys [23], and that the genetic background is the specific collection of allelic gene variants that makes individuals present different inheritable characters within a species. In this sense, inbred mouse strains are widely used to study the effects of different genetic backgrounds on disease phenotypes [24]. In the present study, we tested the role of the Asm gene in the development of hHcys-induced glomerular injury or sclerosis using Cbs+/− mutant mice. These mice have a 50% reduction in Cbs mRNA and enzyme activity in the liver, and their plasma Hcys levels are about 2-fold higher than those of wild-type littermates [19,25]. Thus, the Cbs+/− mice develop mild hHcys and are a good model to study hHcys-related disease processes [19].

[Figure 5. Glomerular O2·− production in Cbs+/+/Asm+/+, Cbs+/+/Asm+/−, Cbs+/+/Asm−/−, Cbs+/−/Asm+/+, Cbs+/−/Asm+/− and Cbs+/−/Asm−/− mice. A: Representative ESR spectra traces for O2·− production in the 6 groups of mice. B: Values are arithmetic means ± SEM (n = 5 per group) of O2·− production in the 6 groups. * Significant difference (P<0.05) compared to the values from Cbs+/+/Asm+/+ mice; # significant difference (P<0.05) compared to the values from Cbs+/−/Asm+/+ mice. doi:10.1371/journal.pone.0045020.g005]

Indeed, our results showed that the plasma Hcys concentration was two-fold higher in all Cbs+/−/Asm+/+, Cbs+/−/Asm+/− and Cbs+/−/Asm−/− mice compared to their Cbs wild-type littermates. These results suggest that Asm itself is not involved in the metabolism of Hcys. Importantly, we found that the increased plasma Hcys concentration resulted in remarkable glomerular damage or sclerosis in Cbs+/−/Asm+/+ mice, but not in Cbs+/−/Asm−/− mice, suggesting that Asm gene knockout protects the glomeruli from hHcys-induced injury in mice with fewer copies of the Cbs gene. There is considerable evidence supporting the critical role of the ceramide signaling pathway in the pathogenesis of kidney diseases [18,21]. Ceramide production is mainly mediated via the hydrolysis of membrane sphingomyelin by various sphingomyelinases such as acid sphingomyelinase (Asm) or neutral sphingomyelinase (NSM), or by de novo synthesis via serine palmitoyltransferase (SPT) and ceramide synthase [26]. Ceramide is subsequently metabolized into sphingosine by ceramidases, and sphingosine can be further converted to S1P via sphingosine kinase [26] in response to a variety of stimuli including proinflammatory cytokines, oxidative stress, and increased levels of free fatty acids. It has been reported that ceramide may mediate the detrimental or pathogenic actions induced by many different injury factors in different cells and tissues [27][28][29]. More recently, ceramide-mediated signaling has been found to cross talk with redox signaling associated with NAD(P)H oxidase, which represents a novel cellular signaling cascade that participates in the development of different diseases [18,21]. In this regard, we recently reported that increased plasma Hcys concentrations enhanced ceramide production, leading to activation of NAD(P)H oxidase in the kidney, and that inhibition of ceramide production improved glomerular injury in hyperhomocysteinemic rats [18]. The present study further demonstrated that Asm gene knockout attenuated hHcys-induced ceramide production, local oxidative stress and glomerular injury in mice lacking the Cbs gene (Cbs+/−/Asm−/−). Using podocin as a podocyte marker, our confocal microscopic data showed that hHcys-induced ceramide expression in glomeruli was mostly located in podocytes, as demonstrated by the colocalization of ceramide with podocin. This colocalization was substantially blocked in mice lacking both the Asm and Cbs genes (Cbs+/−/Asm−/−). Furthermore, Asm activity in renal tissues was significantly increased in hyperhomocysteinemic Cbs+/−/Asm+/+ mice, but not in Cbs+/−/Asm−/− mice. The increased Asm activity in Cbs+/−/Asm+/+ mice may at least partially be due to enhanced Asm mRNA expression. In this regard, our previous studies have shown that hHcys increased Asm activity and Asm mRNA expression in renal tissues of Asm+/+ mice but not of Asm−/− mice [21]. It was also shown that Hcys stimulation of podocytes enhanced the colocalization of membrane rafts and Asm in the plasma membrane, revealing the translocation of Asm into the cell membrane upon Hcys treatment [21].
These results suggest that hHcys-induced renal and glomerular ceramide production is mainly caused by activation of Asm in mice. In accordance with the lowered ceramide production in Cbs+/−/Asm−/− mice, urinary albumin and protein excretion as well as glomerular injury and sclerosis were also significantly decreased compared with Cbs+/−/Asm+/+ mice, suggesting that ceramide-associated renal injury during hHcys is alleviated in these Cbs+/−/Asm−/− mice. Taken together, these results suggest that Asm gene knockout produces beneficial effects in hyperhomocysteinemic mice lacking the Cbs gene, and therefore the Asm gene and its corresponding signaling pathway could be a therapeutic target for hHcys-induced podocyte injury and consequent glomerular sclerosis.

To further explore the mechanisms by which Asm gene knockout protects glomeruli from injury induced by hHcys, we examined additional changes in podocyte function in the various gene-mutant mice. It has been well documented that proteinuria is a hallmark of renal injury and a major deteriorating factor in the progression of end-stage renal disease [30]. The outer aspect of the glomerular basement membrane is lined by highly specialized visceral epithelial cells, named podocytes, and these podocytes serve as the final defense against urinary protein loss in the normal glomerulus [31]. Any damage to these podocytes and their slit diaphragm is intimately associated with proteinuria [32]. The assessment of a normal slit diaphragm component such as podocin [33] and of the injured-podocyte marker desmin [34] is therefore now considered to provide two major sensitive markers of podocyte injury and subsequent glomerulopathy in renal diseases. In the present study, we showed that podocin and nephrin proteins were markedly decreased in hyperhomocysteinemic Cbs+/−/Asm+/+ mice, but not in mice lacking both the Asm and Cbs genes (Cbs+/−/Asm−/−). In addition, we found that desmin was markedly increased in the glomeruli of Cbs+/−/Asm+/+ mice compared with Cbs+/−/Asm−/− mice. These results further support the view that hHcys-induced glomerular injury is associated with increased ceramide production via Asm and its pathological action on podocytes. Furthermore, several studies have demonstrated that NADPH oxidase-dependent O2•− production is an early event in Hcys-induced glomerular cell damage and glomerular sclerosis [18,35]. It is possible that hHcys-induced NADPH oxidase activation is mediated by enhanced Asm activity in Cbs+/−/Asm+/+ mice. To test this hypothesis, the present study used electron spin resonance analysis and demonstrated that hHcys indeed significantly increased NADPH oxidase-dependent O2•− production in Cbs+/−/Asm+/+ mice, but not in Cbs+/−/Asm−/− mice. These results support the view that Asm gene expression and ceramide production play a critical role in mediating glomerular O2•− production through activation of NADPH oxidase during hHcys. In addition to the whole-animal experiments, we also used cultured murine podocytes to examine the direct effect of altered Asm activity on ceramide production and podocyte injury, in an attempt to further confirm the role of ceramide and consequent NADPH oxidase activation in Hcys-induced podocyte injury. It was found that Hcys stimulation of mouse podocytes significantly increased ceramide and desmin expression, but decreased podocin expression compared with the control cell group. However, pretreatment with amitriptyline, an Asm inhibitor, attenuated Hcys-induced ceramide production and podocyte injury.
Furthermore, we examined whether the effects of Asm inhibition are associated with Hcys-enhanced oxidative stress in podocytes. It was found that amitriptyline blocked Hcys-induced NADPH oxidase activation. Given that ceramide production is a critical early mechanism initiating or promoting Hcys-induced podocyte injury and glomerulosclerosis [36], these results from cultured mouse podocytes further confirm the findings from our in vivo studies, supporting the conclusion that Hcys-induced podocyte and glomerular injury is associated with increased ceramide production via Asm activity. Another functional abnormality of Hcys-induced podocyte injury detected in the present study was altered production of VEGF-A in cultured podocytes. Podocyte-derived VEGF-A is found to be decreased in sclerotic glomeruli [37], while treatment with exogenous VEGF-A decreases renal sclerotic injury and restores glomerular capillaries [38]. VEGF-A may serve as a crucial growth factor in maintaining the normal function of podocytes by preventing their apoptosis through interaction with nephrin and activation of the AKT signaling pathway [39]. In the present study, we found that Hcys treatment significantly decreased the production of VEGF-A in podocytes, which was restored by amitriptyline, an Asm inhibitor.

In summary, the present study demonstrated that deletion of the Asm gene produced beneficial effects on the glomerular injury and sclerosis that occur in hyperhomocysteinemic mice lacking the Cbs gene. This amelioration of glomerular injury by Asm gene knockout or Asm inhibition during hHcys suggests a pivotal role of Asm gene expression and Asm activation in hHcys-induced glomerulosclerosis. These findings may potentially direct the development of new therapeutic strategies for the treatment and prevention of end-stage renal disease associated with hHcys and hHcys-related pathological processes such as hypertension, diabetes, atherosclerosis and aging.

Animals and Genotyping of Mice

Cbs+/− and wild-type mice were purchased from the Jackson Laboratory. We first generated and characterized mice lacking the cystathionine β-synthase (Cbs) and Asm genes by cross-breeding Cbs+/− and Asm+/− mice after each of the original mouse strains had been bred for more than 5 generations with careful genotyping to maximize their purity (Figure 1A). Twelve-week-old male uninephrectomized Cbs+/+/Asm+/+, Cbs+/+/Asm+/−, Cbs+/+/Asm−/−, Cbs+/−/Asm+/+, Cbs+/−/Asm+/− and Cbs+/−/Asm−/− mice were used in the present study. In Cbs−/− homozygous mice [40], a genomic fragment of exons 2 and 3, which encodes the putative Cbs active site, was replaced by a neomycin selection cassette. PCR confirmation of this genotype was achieved with specific primers designed for wild-type exon 2, 5′-TCTGAGGACCAATGTTAGGATG-3′ and 5′-CTAATGGAACTTCGCCTTGTG-3′. For confirmation of Asm gene deletion [41] in these mice, primers 5′-CTTGGGTGGAGAGGCTATTC-3′ and 5′-AGGTGAGATGACAGGAGATC-3′ were used for genotyping. Genomic DNA was extracted from mouse tails using the ArchivePure DNA purification kit (5 Prime Inc., Gaithersburg, MD), and the PCR reaction was carried out in a Bio-Rad iCycler, initiated at 94°C for 1 min to denature the template and activate the Taq DNA polymerase, followed by 30 cycles of PCR amplification. Each cycle included denaturation at 94°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 1 min. Electrophoresis of the PCR products was performed in a 2% agarose gel.
All protocols were approved by the Institutional Animal Care and Use Committee of Virginia Commonwealth University.

High-performance Liquid Chromatography (HPLC) Analysis

Plasma Hcys levels were measured by HPLC as we described previously [21,42]. A 100 μL plasma or standard solution mixed with 10 μL of internal standard, thioglycolic acid (2.0 mmol/L) solution, was treated with 10 μL of 10% tri-n-butylphosphine (TBP) solution in dimethylformamide at 4°C for 30 minutes. Then, 80 μL of ice-cold 10% trichloroacetic acid (TCA) in 1 mmol/L EDTA was added and the sample centrifuged to remove proteins. 100 μL of the supernatant was transferred into a mixture of 20 μL of 1.55 mol/L sodium hydroxide, 250 μL of 0.125 mol/L borate buffer (pH 9.5), and 100 μL of 1.0 mg/mL ABD-F solution. The resulting mixture was incubated at 60°C for 30 minutes to accomplish derivatization of thiols. HPLC was performed with an HP 1100 series instrument (Agilent Technologies, Waldbronn, Germany) equipped with a binary pump, a vacuum degasser, a thermostated column compartment, and an autosampler (Agilent Technologies, Waldbronn, Germany). Separation was carried out at ambient temperature on an analytical column, Supelco LC-18-DB (Supelco; 150 × 4.6 mm i.d., 5 μm particle size) with a Supelcosil LC-18 guard column (Supelco; 20 × 4.6 mm i.d., 5 μm particle size). Fluorescence intensities were measured with an excitation wavelength of 385 nm and an emission wavelength of 515 nm by a Hewlett-Packard Model 1046A fluorescence detector (Agilent Technologies). The peak area of the chromatograms was quantified with a Hewlett-Packard 3392 integrator (Agilent Technologies). The analytical column was eluted with 0.1 mol/L potassium dihydrogen phosphate buffer (pH 2.1) containing 6% acetonitrile (v/v) as the mobile phase at a flow rate of 2.0 mL/min.

Morphological Examinations

The fixed kidneys were paraffin-embedded, and sections were prepared and stained with Periodic acid-Schiff stain. The glomerular damage index (GDI) was calculated on a 0-4 scale on the basis of the degree of glomerulosclerosis and mesangial matrix expansion as described previously [43]. In general, we counted 50 glomeruli in total in each kidney slide under the microscope, with each glomerulus graded for level 0-4 damage: 0 represents no lesion, 1+ represents sclerosis of <25% of the glomerulus, while 2+, 3+ and 4+ represent sclerosis of 25% to 50%, >50% to 75%, and >75% of the glomerulus, respectively. A whole-kidney average sclerosis index was obtained by averaging the scores from the counted glomeruli [44]. This observation was conducted by two independent investigators who were blinded to the treatment of the experimental animal groups.

Asm Activity

The activity of Asm was determined as we described previously [13,21]. Briefly, N-methyl-[14C]-sphingomyelin was incubated with renal cortical tissue homogenates, and the metabolite of sphingomyelin, [14C]-choline phosphate, was quantified. An aliquot of homogenate (20 μg) was mixed with 0.02 μCi of N-methyl-[14C]-sphingomyelin in 100 μl acidic reaction buffer containing 100 mmol/L sodium acetate and 0.1% Triton X-100, pH 5.0, and incubated at 37°C for 15 min. The reaction was terminated by adding 1.5 ml chloroform:methanol (2:1) and 0.2 ml double-distilled water. The samples were then vortexed and centrifuged at 1,000 g for 5 min to separate the two phases. A portion of the upper aqueous phase containing [14C]-choline phosphate was transferred to scintillation vials and counted in a Beckman liquid scintillation counter. The choline phosphate formation rate (nmol·min−1·mg protein−1) was calculated to represent the enzyme activity.
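To make the unit arithmetic of the assay explicit, the following is a minimal Python sketch of how a choline phosphate formation rate could be computed from scintillation counts; the counting efficiency, specific activity and example numbers are hypothetical and not taken from the study.

```python
# Hedged sketch: choline phosphate formation rate from scintillation counts.
# Counting efficiency, specific activity and the example values below are
# hypothetical; only the unit logic (nmol per min per mg protein) is fixed.
def asm_activity(cpm, efficiency, specific_activity_dpm_per_nmol,
                 minutes, mg_protein):
    """Return activity in nmol·min^-1·mg protein^-1."""
    dpm = cpm / efficiency                        # counts -> disintegrations
    nmol = dpm / specific_activity_dpm_per_nmol   # disintegrations -> product
    return nmol / (minutes * mg_protein)

# Example: 15-min incubation of 20 ug (0.02 mg) homogenate protein
print(asm_activity(cpm=4500, efficiency=0.9,
                   specific_activity_dpm_per_nmol=2200,
                   minutes=15.0, mg_protein=0.02))
```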
Liquid Chromatography-Electrospray Ionization Tandem Mass Spectrometry (LC-ESI-MS) for Quantitation of Ceramide

Separation, identification and quantitation of ceramide in plasma were performed by LC/MS. The HPLC was equipped with a binary pump, a vacuum degasser, a thermostated column compartment and an autosampler (Waters, Milford, MA, USA). The HPLC separations were performed at 70°C on an RP C18 Nucleosil AB column (5 μm, 70 mm × 2 mm i.d.) from Macherey-Nagel (Düren, Germany). The mobile phase was a gradient mixture formed as described [45]. The renal lipids were extracted according to previous studies. To avoid any loss of lipids, the whole procedure was performed in siliconized glassware. MS detection was carried out using a Quattro II quadrupole mass spectrometer (Micromass, Altrincham, England) operating under MassLynx 3.5 and configured with a Z-spray electrospray ionization source. Source conditions were the same as described previously in our studies and by others [21,45].

Cell Culture

A conditionally immortalized mouse podocyte cell line [46], kindly provided by Dr. Klotman PE (Division of Nephrology, Department of Medicine, Mount Sinai School of Medicine, New York, NY, USA), was cultured on collagen I-coated flasks or plates in RPMI 1640 medium supplemented with recombinant mouse interferon-γ at 33°C. After differentiation at 37°C for 10-14 days without interferon-γ, podocytes were used for the proposed experiments as we described previously [21]. After washing, the slides were incubated with Alexa 555-labeled secondary antibodies for 1 h at room temperature. After being mounted with DAPI-containing mounting solution, the slides were observed under a fluorescence microscope and photographs were taken and analyzed. The fluorescence intensities were quantified with Image Pro Plus 6.0 software (Media Cybernetics, Bethesda, MD, USA) and the data were normalized to control cells.

Western Blot Analysis

Western blot analysis was performed as we described previously [36]. In brief, proteins from the mouse renal cortex were extracted using sucrose buffer containing protease inhibitor. After boiling for 5 min at 95°C in 5× loading buffer, 20 μg of total protein were subjected to SDS-PAGE, transferred onto a PVDF membrane and blocked. The membrane was then probed with primary antibody against desmin (1:500, BD Biosciences, San Jose, CA, USA) or β-actin (1:3000, Santa Cruz Biotechnology, Santa Cruz, CA, USA) overnight at 4°C, followed by incubation with horseradish peroxidase-labeled IgG (1:5000). The immunoreactive bands were detected by chemiluminescence and visualized on Kodak Omat X-ray films. Densitometric analysis of the images obtained from the X-ray films was performed using ImageJ software (NIH, Bethesda, MD, USA).

Direct Fluorescent Staining of F-actin

To determine the role of Asm inhibition in Hcys-induced cytoskeleton changes, podocytes were cultured in 8-well chambers and treated with Hcys (40 μM, 24 h). In an additional group of cells, the Asm inhibitor amitriptyline (20 μM, Sigma, St. Louis, MO, USA) was added to pretreat the cells for 30 minutes before the addition of Hcys or puromycin aminonucleoside (PAN, 100 μg/ml, Sigma, St. Louis, MO, USA) for 24 h. After pretreatment with vehicle or amitriptyline, the cells were treated with L-Hcys (40 μM) for 24 h.
After washing with PBS, the cells were fixed in 4% paraformaldehyde for 15 min at room temperature, permeabilized with 0.1% Triton X-100, and blocked with 3% bovine serum albumin. F-actin was stained with rhodamine-phalloidin (Invitrogen, Carlsbad, CA, USA) for 15 min at room temperature. After mounting, the slides were examined by confocal laser scanning microscopy. Cells with distinct F-actin fibers were counted as we described previously [36]. Scoring was obtained from 100 podocytes on each slide in the different groups.

ELISA for Vascular Endothelial Growth Factor A (VEGF-A) in Podocytes

After pretreatment with amitriptyline (20 μM, Sigma, St. Louis, MO, USA) or its vehicle, podocytes were incubated with Hcys (40 μM) for 24 h. A specific podocyte injury compound, puromycin aminonucleoside (PAN, 100 μg/ml), was used to treat cells for 24 h as a positive control. The supernatant was collected for ELISA assay of VEGF-A using a commercially available kit (R&D Systems, Minneapolis, MN).

Urinary Total Protein and Albumin Excretion Measurements

Twenty-four-hour urine samples were collected using metabolic cages and subjected to total protein and albumin excretion measurements, respectively [13,21]. Total protein content in the urine was detected by the Bradford method using a UV spectrophotometer. Urine albumin was detected using a commercially available albumin ELISA kit (Bethyl Laboratories, Montgomery, TX).

Electron Spin Resonance (ESR) Analysis of O2•− Production

For detection of Nox-dependent O2•− production, proteins from the renal cortex and cultured podocytes were extracted using sucrose buffer and resuspended in modified Krebs-HEPES buffer containing deferoxamine (100 μM, Sigma) and diethyldithiocarbamate (5 μM, Sigma). NADPH oxidase-dependent O2•− production was examined by addition of 1 mM NADPH as substrate to 50 μg protein and incubation for 15 min at 37°C in the presence or absence of SOD (200 U/ml), followed by addition of 1 mM of the O2•−-specific spin-trapping compound 1-hydroxy-3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine (CMH, Noxygen, Elzach, Germany). The mixture was loaded into glass capillaries and immediately analyzed kinetically for O2•− production for 10 min in a Miniscope MS200 electron spin resonance (ESR) spectrometer (Magnettech Ltd, Berlin, Germany). The ESR settings were as follows: biofield, 3350; field sweep, 60 G; microwave frequency, 9.78 GHz; microwave power, 20 mW; modulation amplitude, 3 G; 4,096 points of resolution; receiver gain, 20 for tissue and 50 for cells. The results were expressed as fold changes relative to control.

Double-immunofluorescent Staining

Double-immunofluorescent staining was performed using frozen slides from mouse kidneys. After fixation, the slides were incubated with rabbit anti-podocin antibody at 1:100 (Sigma, St. Louis, MO, USA), followed by incubation with Alexa 488-labeled goat anti-rabbit secondary antibody. Then, mouse anti-ceramide antibody (Enzo Life Sciences, Plymouth Meeting, PA, 1:50) was incubated with the slides overnight at 4°C. After washing, the slides were incubated with the corresponding Alexa 555-labeled secondary antibodies. Finally, the slides were mounted and examined using a confocal laser scanning microscope (Fluoview FV1000, Olympus, Japan). All exposure settings were kept constant for each group of kidneys.

Immunofluorescent Staining

Immunofluorescent staining was performed using frozen slides of mouse kidneys.
After fixation with acetone, the slides were incubated with anti-podocin (Sigma, St. Louis, MO, USA, 1:100), anti-desmin (BD Biosciences, San Jose, CA, 1:50) or anti-nephrin (Abcam, Cambridge, MA, 1:50) antibodies overnight at 4°C. The slides were then washed and incubated with the corresponding Texas Red-labeled secondary antibodies. Finally, the slides were washed, mounted and subjected to fluorescence microscopic examination. Images were captured with a Spot CCD camera and a pseudocolor was added to the corresponding fluorescent image (Diagnostic Instruments Inc., Sterling Heights, MI, USA). All exposure settings were kept constant for each group of kidneys.

Statistical Analysis

Data are presented as arithmetic means ± SEM; n represents the number of independent experiments. All data were tested for significance using ANOVA for data obtained from multiple animal or experimental groups, or paired and unpaired Student's t-tests for two groups of animals or experimental protocols. The glomerular damage index was analyzed for statistical significance using a nonparametric Mann-Whitney rank-sum test. Only results with p<0.05 were considered statistically significant.
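As a rough illustration of the statistical workflow described above (ANOVA across multiple groups, Mann-Whitney rank-sum test for the damage index, alpha = 0.05), here is a hedged Python sketch; the group values are made up.

```python
# Illustrative sketch of the statistics described above; the group data are
# made up, and the significance threshold follows the stated alpha of 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(loc=mu, scale=1.0, size=5) for mu in (1.0, 1.2, 2.5)]

# One-way ANOVA across multiple experimental groups
f_stat, p_anova = stats.f_oneway(*groups)

# Nonparametric Mann-Whitney rank-sum test for the damage index (two groups)
u_stat, p_mw = stats.mannwhitneyu(groups[0], groups[2])

print(f"ANOVA p = {p_anova:.4f}; Mann-Whitney p = {p_mw:.4f}")
print("significant" if p_mw < 0.05 else "not significant")
```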
Krukenberg tumor in a pregnant patient with severe preeclampsia

Krukenberg tumors accompanied by gestational hypertension are rare and carry a poor prognosis. A gravida 1, para 0 patient was admitted to Tianjin Medical University General Hospital (Tianjin, China) at 32 weeks gestation with symptoms of nausea, vomiting and hypertension. The diagnosis from a gastroscopic biopsy was gastric ulcer. A unilateral ovarian mass was identified with B-scan ultrasonography and magnetic resonance imaging, but was confirmed pathologically as a bilateral Krukenberg tumor. Positron emission tomography-computed tomography revealed high radioactive uptake in the lesser curvature wall of the stomach, and postoperative pathology revealed poorly differentiated adenocarcinoma of the stomach. As Krukenberg tumors are difficult to diagnose, exhibit fast progression and have a poor clinical outcome, developing a greater understanding of them is crucial. Imaging manifestations combined with serological examination may aid early detection, which may lead to improved patient management.

Introduction

A Krukenberg tumor is a metastatic ovarian mucin-filled signet-ring cell carcinoma (1) that exhibits fast progression and has a poor outcome. Krukenberg tumors account for 1-2% of all ovarian tumors (2), and the 5-year survival rate for a patient with a Krukenberg tumor ranges between 12 and 23.4% (3). Krukenberg tumors in pregnant patients are even rarer, and the survival rate is usually poor (2-5). The present study reports on a patient with gestational hypertension in addition to a progressive Krukenberg tumor. A unilateral encapsulated pedunculated solid and cystic mass with multiple nodules in the solid portion was identified by ultrasound and magnetic resonance imaging (MRI). However, pathological examination confirmed that the bilateral adnexa were involved. The tumor had three features on ultrasonography: it increased in size quickly, had multiple nodular components in the solid portion, and had a main vessel with small branches penetrating from the tumor pedicle into the solid portion.

Case report

Patient history. A 31-year-old female, gravida 1, para 0, was referred to Tianjin Medical University General Hospital (Tianjin, China) at 32 weeks gestation due to nausea and vomiting, with a blood pressure of 155/101 mmHg. The patient had experienced epigastric discomfort for two weeks prior to admission, but did not see a doctor until the vomiting had persisted for three days. The patient had gained 7 kg in the last month and had previously had regular menses (4-5 days each time with a menstrual cycle of 37 days; dysmenorrhea was negative). The last menstrual period had been July 2, 2012, and the estimated date of confinement was April 9, 2013. A urine human chorionic gonadotropin (HCG) test was positive following 40 days of amenorrhea. Fetal movements were felt at four months gestation and a four-dimensional ultrasound examination was normal at 28 weeks gestation. The patient had previously experienced two episodes of stomach bleeding (one and two years earlier) of unknown cause, but the gastroscopy examinations had appeared normal. The patient's mother had a history of hypertension. Serological tests revealed an increased white blood cell count and mildly abnormal hepatic and renal function; additionally, the D-dimer levels were abnormally high (Table I).
A urine test revealed protein quantitation of 241.6 mg/dl, and was positive for urobilirubin (++) and ketones (+). Written informed consent was obtained from the patient.

Diagnosis. Transabdominal color Doppler examination revealed an encapsulated pedunculated solid and cystic mass (20x18x13 cm) with irregular but clear margins on the front left of the uterus (Fig. 1). A main vessel was observed, with a pulsatility index of 0.76, a resistive index of 0.52 and a time-averaged maximum velocity of 27.63 (Fig. 2). There was a large quantity of fluid surrounding the mass. On MRI, the patient was diagnosed with epithelial cancer, and the imaging features were consistent with the ultrasound in terms of location, size, external configuration and internal structure (Fig. 3). Tumor marker tests revealed that α-fetoprotein, carcinoembryonic antigen (CEA), cancer antigen 19-9, cancer antigen 125 and HCG levels were abnormally high (Table II). Gastroscopy revealed a long ulcerous lesion, 2.4 cm in length, in the posterior wall of the middle of the gastric body. The biopsy diagnosis was gastric ulcer.

Treatment. At week 38, the patient underwent abdominal exploration and a healthy male infant weighing 1,300 g was delivered by cesarean section, with Apgar scores of 6 and 8 at 1 and 5 min, respectively. The left uterine appendages were removed, and subsequent pathological examination of the frozen section of the left ovary revealed metastatic poorly-differentiated adenocarcinoma (Fig. 4). Subsequently, a right adnexectomy was performed. Pathological examination revealed minimal invasion of carcinoma tissue in the right ovary, although the tissue appeared normal on abdominal exploration and imaging. Macropathological analysis of the left ovary revealed multiple nodules (diameters, 0.2-5 cm) in the mass. Immunohistochemistry indicated that CEA, cytokeratin (CK), CK7 and CK20 were positive (Fig. 5), while the ascites were negative.

Follow-up. Five days after surgery, the patient underwent a positron emission tomography-computed tomography scan. High levels of radioactive uptake were detected in the lesser curvature wall of the stomach, which indicated gastric carcinoma. The patient underwent a total gastrectomy and regional lymph node dissection combined with intraperitoneal chemotherapy. Postoperative pathology revealed poorly-differentiated adenocarcinoma with signet-ring cell carcinoma of the stomach, which infiltrated the serous membrane. In addition, a cancerous embolus was observed in the blood vessels.

Discussion

Krukenberg tumors are a type of metastatic ovarian tumor. The primary tumor usually originates from the gastrointestinal tract, primarily from the stomach or the colon and rectum, although occasionally the tumors originate from the breast, uterus, biliary tract, pancreas and kidney (1). The bloodstream, lymphatic system and local implantation are common routes of Krukenberg tumor metastasis (6). The tumor is predominantly solid and often involves both ovaries, with a clear border and irregular shape, occasionally exhibiting a single or multiple cystic structure. According to MRI (7) and pathology (1,6) examinations, more than half of solid tumors exhibit a random distribution of multiple nodular components. This is uncommon in primary ovarian tumors and is useful in discriminating Krukenberg tumors from primary ovarian tumors.
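For reference, the pulsatility and resistive indices reported in the Diagnosis section follow from the standard Doppler definitions PI = (PSV − EDV)/TAMX and RI = (PSV − EDV)/PSV. A small Python sketch with hypothetical velocities, chosen to be consistent with the reported indices:

```python
# Standard Doppler index definitions; the velocity values are hypothetical,
# back-computed to match the indices reported for this case (PI 0.76, RI 0.52).
def pulsatility_index(psv, edv, tamx):
    """PI = (peak systolic - end diastolic velocity) / time-averaged max velocity."""
    return (psv - edv) / tamx

def resistive_index(psv, edv):
    """RI = (peak systolic - end diastolic velocity) / peak systolic velocity."""
    return (psv - edv) / psv

psv, edv, tamx = 40.4, 19.4, 27.63  # cm/s, illustrative
print(f"PI = {pulsatility_index(psv, edv, tamx):.2f}")  # ~0.76
print(f"RI = {resistive_index(psv, edv):.2f}")          # ~0.52
```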
However, sonographic observations in existing case reports of Krukenberg tumors commonly describe the tumors as roughly uniform solid masses with a strong echo, but lacking internal nodular structure (3,8). Sonographic observations in the present study revealed an encapsulated pedunculated solid and cystic mass, with multiple hyperechoic nodules in the solid portion, in accordance with the MRI and pathological results. Furthermore, a main vessel with small branches was observed to penetrate from the tumor pedicle into the solid portion, with high-speed, low-resistance flow. (Figure caption: Signet-ring cells may be observed at a higher magnification. Hematoxylin and eosin staining; magnification, x100, and x400 for the last image.) Compared with other imaging examinations, color Doppler sonography is safer, inexpensive and more convenient for diagnosing these diseases, particularly in pregnant patients (9). Thus, the sonographic observations of Krukenberg tumors are important. The level of ovarian hormones during pregnancy and a rich blood flow contribute to tumor metastases. The most common clinical manifestations of Krukenberg tumors during pregnancy are nausea and vomiting, which are similar to symptoms often experienced during pregnancy and thus may be neglected. As the gestational period progresses and the waist circumference increases, pelvic neoplasms and ascites become difficult to locate (4). Therefore diagnosis is often delayed, as demonstrated by the present case. Disease progression of Krukenberg tumors is often fast; thus, numerous patients present with metastatic carcinoma before the primary tumor is found. If a patient has a history of primary gastrointestinal tumors or gastrointestinal symptoms, including stomach bleeding, and suffers from nausea and vomiting in the middle or later stages of pregnancy, a Krukenberg tumor may be considered as a differential diagnosis. With regard to the patient in the present case, the four-dimensional ultrasound examination had been normal one month previously, but a 20x18x13 cm mass was located 4 weeks later. The quick progression may correlate with severe preeclampsia, which is predominantly caused by placental hypoxia-ischemia (10). Placental hypoxia may release a variety of soluble factors, leading to a hyperdynamic blood condition and accelerated renal blood flow, potentially promoting tumor progression. The imaging examinations in the current case located the ovarian neoplasm only on the left side, but pathological diagnosis indicated that the carcinoma had also invaded the right ovary. This is because tumor metastasis, particularly from the stomach, usually occurs via blood or lymphatic channels and leads to bilateral adnexal involvement. However, the degree of involvement varies: some tumors can be located by imaging examinations, while others cannot. Therefore, the diagnosis of a Krukenberg tumor should depend on pathological confirmation. Once unilateral lesions are located by imaging examination, it is important to be aware that contralateral metastasis may also exist. Thus, more care should be taken during abdominal exploration, as resecting all metastases is important to improve the patient prognosis. In conclusion, Krukenberg tumors are difficult to diagnose, progress fast and have a poor outcome; thus, improving the understanding of Krukenberg tumors is of particular importance.
When a solid and cystic mass with multiple hyperechoic nodules and a main vessel with small branches in the solid portion is identified in a pregnant woman by ultrasound, it may be a Krukenberg tumor rather than a primary tumor. Further examinations should be performed to substantiate the diagnosis and identify the source; this may help doctors choose the best treatment and gain more survival time for the patient.
Efficient implementation of a real-time estimation system for thalamocortical hidden Parkinsonian properties

Real-time estimation of the dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, the FPGA-based unscented Kalman filter is implemented around a conductance-based TC neuron model. Since the complexity of the TC neuron model constrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While it is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, in neural control engineering and in brain-machine interface studies.

Parkinson's disease (PD) is a neurodegenerative disease characterized by the degradation of substantia nigra dopaminergic neurons [1-4], and the cellular mechanisms inducing neuronal death are still unknown. The most significant symptoms are movement disorders such as shaking, slowness of movement, rigidity and problems with walking and gait. They are closely related to the loss of the ability of thalamocortical (TC) neurons to relay excitatory sensorimotor cortical information. In fact, the loss of this ability is caused by low-frequency pathological rebound bursts in TC neurons [5-9]. In a TC neuron, the low-threshold T-type calcium current is essential to the generation of the rebound bursts induced by excessive GABAergic (γ-aminobutyric acid-mediated) projections from the basal ganglia [10]. The dynamics of the low-threshold T-type calcium current are considered the thalamocortical hidden properties. Besides, the thalamus is a vital gateway to the neocortex, in which sensory pathways to the cortex pass through the appropriate thalamic nuclei [11-14]. Previous experimental paradigms have been proposed to explore the functions of the thalamus under the neurodegenerative state of movement disorders. At the cellular and microcircuit levels, intracellular and extracellular recording techniques including voltage clamp and dynamic clamp are widely applied in electrophysiological experiments [15,16]. In comparison with voltage-clamp techniques, the dynamic clamp technique can test more sophisticated hypotheses in electrically excitable neurons. It has been used to apply artificial conductances to neurons to investigate the mechanism underlying rhythmical bursts, which is useful in the research of PD [17]. Researchers have also used the dynamic clamp in TC neurons in vitro to explore the effects of ionic currents on bursting activities under the Parkinsonian state [18]. As a result, the dynamic clamp technique is useful in the research of the Parkinsonian mechanism.
The dynamic clamp technique uses the measured membrane potential to control the amount of current injected into a neuron. Because the membrane potential changes faster than the hidden variables, the injected current is large and changes rapidly; this can damage the neuronal physiological structure and costs more energy. Therefore, estimating the hidden variables of a neuron can assist dynamic clamping. However, two challenges limit the performance and application of dynamic clamp techniques in the research of movement disorders. First, the critical hidden properties underlying the membrane potential, needed for the investigation of pathological states, cannot be observed directly by current electrophysiological dynamic clamp techniques. Previous studies have revealed that the dysrhythmia of PD is generated by T-type calcium channel de-inactivation in the Parkinsonian thalamus, reflected by variations in the concentration of T-type calcium ions in the TC relay neurons [6,19-21]. The prediction of the slow variable in the neuron model is also essential in neural control engineering (NCE) and brain-machine interface (BMI) projects [22-24]. Second, hardware-based dynamic clamp systems are limited by their poor programmability, while software-based systems cannot guarantee real-time performance [25,26]. As a result, there is a demand for a novel approach with the advantages of high computational efficiency as well as programmability to remedy the disadvantages of conventional implementations. During the past decades, the unscented Kalman filter (UKF) has been used as an efficient method for the dynamical estimation of nonlinear systems [27-29]. Previous studies have used the UKF algorithm in the estimation of spatiotemporal cortical dynamics [30,31]. Considering the importance of the thalamus in the investigation of PD, it is useful to estimate the thalamocortical ionic state parameters from membrane potentials contaminated with noise. However, the UKF has shortcomings when applied to TC dynamical estimation. The states of the ion channels change rapidly during external stimulation, which requires the latency of the UKF to satisfy real-time dynamical tracking constraints. Moreover, the UKF algorithm is highly complex and involves a large volume of data; in each iteration, numerous matrix multiplications and inversion operations are performed. These factors limit the UKF in physiological applications. To speed up the on-line computational performance of the UKF, a hardware implementation with high computational capacity is required. The complex computation is a challenge for a fixed-point processor system, especially for implementation on a hardware chip. With the advantages of low energy consumption, high reliability, parallel processing and fast time to market [32-36], a Field Programmable Gate Array (FPGA)-based implementation shows promise for a neural estimation system with higher performance. In order to estimate the experimentally inaccessible dynamics of the neuron, the UKF algorithm must be implemented around the neuron model [23,24,31]. As a result, another challenge in neural dynamical estimation is the implementation of the nonlinear neuron model to reconstruct unobserved intracellular variables and parameters from measured membrane potentials alone.
Biologically conductance-based computational models of the TC neuron contain a number of nonlinear functions with multiplications and sigmoid functions, which cannot be used directly in a parallel-structured implementation of the UKF algorithm. Optimization of this model is necessary for the implementation of the proposed estimation system. In this paper, we present a real-time thalamocortical dynamical estimation (RTDE) system to explore the mechanisms of movement disorders by estimating the hidden properties in a noisy measurement environment with improved computational performance. The system receives the membrane potentials of the TC neuron model and outputs the estimates of the hidden properties. Since the UKF system needs to be applied to a conductance-based TC neuron model, a cost-efficient TC (CETC) neuron model is proposed, which is implemented with low hardware overhead to achieve real-time execution. To the best of our knowledge, no previous works have proposed a real-time system for TC dynamical estimation of the ion channels. Our study facilitates the exploration of the mechanisms of the Parkinsonian state and the performance enhancement of current electrophysiological techniques such as the dynamic clamp. The general scheme of the proposed work, targeting the estimation of pathological states, can be applied in studies based on the dynamic clamp technique. The proposed system can also be useful for improving neuromodulation and investigating the mechanisms of various diseases, including Huntington's disease, epilepsy and Alzheimer's disease [37-40]. The proposed study is the first prototype of a hardware-based platform that uses a real-time neural dynamical estimation system to track the biological characteristics of the thalamus underlying neural firing. This work provides a new perspective for neural engineering and can be further applied in NCE and BMI projects.

Results

System description. A schematic diagram of the experimental setup and a general overview of the electronic system are shown in Fig. 1. This platform was established to evaluate the performance of the proposed RTDE system. The RTDE system is equipped with an analog-to-digital conversion (ADC) device to receive the input signals of the thalamocortical membrane potentials and a digital-to-analog conversion (DAC) device to output the analog estimation results of the hidden properties to the oscilloscope or the DAQ device. The analog outputs can also be acquired by a data acquisition (DAQ) device and visualized on a personal computer or used in neuromodulation systems. The proposed hardware implementation of the RTDE system is divided into three parts: the prediction module of the UKF, the transformed-points acquirement module and the updating module of the UKF. A detailed description of each module is given in the following sections. In terms of control logic, a finite-state machine (FSM) is used as a block of combinational logic that determines the state transitions. The cortical excitatory input current from the sensorimotor cortex is implemented in the sensorimotor input current module. The transformed-points acquirement module contains eight digital TC neurons implemented in a parallel hardware topology. The updating module of the UKF receives the observed membrane potentials through an ADC device.
The estimated hidden properties are output from the prediction module of the UKF to the peripheral DAC device. Since a second-order UKF algorithm is employed and the estimated neuron model has three state variables and one estimated parameter, eight computational modules are required.

The hardware-oriented CETC neuron model and its dynamical characteristics. In this study, a conductance-based TC neuron model is considered for the establishment of the RTDE system, with the gating variable m evolving on a much faster time scale than the variable V. This allows for model reduction and simplification, because m approaches its asymptotic value m∞ very quickly. Thus, m can be replaced by m∞ in the sodium channel dynamics that enter the voltage equation. Further, the potassium activation variable n is replaced by 1−h, and the TC relay neuron model is reduced to a three-dimensional model. The equation of the membrane potential V can be expressed as

C_m dV/dt = −I_L − I_Na − I_K − I_T − I_Gi→Th + I_SM,

where I_L, I_Na, I_T and I_K are the leak, sodium, low-threshold calcium and potassium spiking currents, respectively. I_Gi→Th is the synaptic current from the globus pallidus internus (GPi) neuron to the TC relay neuron. I_SM stands for the sensorimotor input to the thalamus and takes the form

I_SM(t) = i_SM · H(sin(2πt/ρ_SM)) · [1 − H(sin(2π(t + δ_SM)/ρ_SM))],

where H is the Heaviside step function, such that H(x) = 0 if x < 0 and H(x) = 1 if x > 0. Besides, ρ_SM is the period of I_SM, δ_SM is the duration, and i_SM is the amplitude of the positive input. The mathematical model of the TC relay neuron is based on the studies by Terman et al. [6]. The membrane capacitance C_m is unity. The two gating variables, which are the hidden properties of the TC relay cell, are described by

dh/dt = (h∞(V) − h)/τ_h(V),   dω/dt = (ω∞(V) − ω)/τ_ω(V),

where the steady-state functions h∞(V), ω∞(V) and the time constants τ_h(V), τ_ω(V) are the sigmoidal and exponential functions of voltage given in Terman et al. [6]. The variable h is the gating variable representing the probability that an inactivation gate for sodium ions is open in the TC relay cell, and ω represents the gating variable of the T-type calcium channel. A gating variable is a variable that can switch the channels between open and closed states in a neuron model. The ionic currents are defined as

I_L = g_L (V − E_L),   I_Na = g_Na m∞³(V) h (V − E_Na),   I_K = g_K [0.75(1 − h)]⁴ (V − E_K),   I_T = g_T p∞²(V) ω (V − E_T),

where the corresponding steady-state activation functions m∞(V) and p∞(V) are sigmoidal functions of voltage. The nonlinearity of the original neuron model is a big challenge for a cost-efficient hardware implementation of the biological conductance-based neuron model. The conventional method of implementing the nonlinear parts of a biological neuron model is based on look-up tables (LUTs), which require massive hardware resources. In order to further improve the computational efficiency and reduce the implementation cost of the RTDE system, a CETC neuron model is proposed to address this issue directly. In the CETC neuron model, the nonlinear functions are approximated with piecewise linear functions f_lin. The approximation error over the sample points is measured by

ERR_CF = (1/M) Σ_{i=1}^{M} (f_lin,i − f_ori,i)²,

where M is the total number of sample points, f_lin is the linearized approximation and f_ori is the original function. The normalized cost function for error assessment is given by

NERR_CF% = ERR_CF / (f_max − f_min) × 100%,

where f_max is the maximum value of the modified function and f_min is the minimum value. Another important measure in the model error evaluation is the mean absolute error,

MAE = (1/M) Σ_{i=1}^{M} |e_i|,

where the absolute error |e_i| = |f_lin,i − f_ori,i|. The number of sample points for each function is M = 1000 (see Supplementary Table S2). According to the results, the proposed CETC model has good precision, with mean ERR_CF = 0.0128 and MAE = 0.0696 for the CETC model. The NERR_CF% is 1.9932% in comparison with the original model.
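To make the reduced three-dimensional model concrete, the following is a minimal forward-Euler sketch in Python. The sigmoid parameters, time constants and conductances are illustrative stand-ins for the published constants in Terman et al.; only the model structure (instantaneous m∞, n replaced by 1 − h, slow variables h and ω) follows the text.

```python
# Minimal sketch of the reduced 3D TC relay model (V, h, w). The sigmoid
# parameters and conductances below are illustrative stand-ins for the
# published values in Terman et al.; only the model structure is from the text.
import numpy as np

def sig(v, theta, k):
    return 1.0 / (1.0 + np.exp((v - theta) / k))

# Assumed parameter values (not the exact published constants)
gL, EL = 0.05, -70.0
gNa, ENa = 3.0, 50.0
gK, EK = 5.0, -90.0
gT, ET = 5.0, 0.0
Cm = 1.0

def tc_rhs(V, h, w, I_ext):
    m_inf = sig(V, -37.0, -7.0)       # Na activation (instantaneous)
    p_inf = sig(V, -60.0, -6.2)       # T-type Ca activation
    h_inf = sig(V, -41.0, 4.0)        # Na inactivation steady state
    w_inf = sig(V, -84.0, 4.0)        # T-type Ca gating steady state
    tau_h = 1.0 + 10.0 * sig(V, -50.0, 5.0)        # assumed time constants
    tau_w = 28.0 + np.exp(-(V + 25.0) / 10.5)
    IL = gL * (V - EL)
    INa = gNa * m_inf**3 * h * (V - ENa)
    IK = gK * (0.75 * (1.0 - h))**4 * (V - EK)     # n replaced by 1 - h
    IT = gT * p_inf**2 * w * (V - ET)
    dV = (-IL - INa - IK - IT + I_ext) / Cm
    dh = (h_inf - h) / tau_h
    dw = (w_inf - w) / tau_w
    return dV, dh, dw

# Forward-Euler integration of a rebound burst after a hyperpolarizing pulse
dt, T = 0.01, 600.0                   # ms
V, h, w = -65.0, 0.4, 0.1
for step in range(int(T / dt)):
    t = step * dt
    I_ext = -1.0 if 100.0 < t < 300.0 else 0.0   # hyperpolarizing step
    dV, dh, dw = tc_rhs(V, h, w, I_ext)
    V, h, w = V + dt * dV, h + dt * dh, w + dt * dw
```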
To evaluate the neural dynamics of the CETC neuron model, investigation of the ionic currents provides insight into the level of similarity through comparisons with the original model, as shown in Fig. 2. The ionic currents are regarded as functions of voltage, and the steady-state currents are calculated with the slow variables set equal to their steady-state values at fixed voltages. As shown in Fig. 2(a) and (b), the ionic currents of the CETC model are consistent with the original model. Error analysis of the dynamics of the CETC neuron model is depicted in Fig. 2(c). We selected ten thousand sampling points to obtain reliable error values. The Parkinsonian dynamics show regular bursting with rebound firing under a single stimulation, and all three error criteria are higher than those under the normal state except for the variable ω under the Parkinsonian state. Figure 2(c) indicates that the CETC model has acceptable accuracy with reliable dynamics. Comparisons between the three variables of the original model and the CETC model under the normal and Parkinsonian states are given in Fig. 2(d) and (e), respectively, which reveal that the CETC model can accurately reproduce the thalamocortical dynamics. The difference in the neuron model between the normal and Parkinsonian states lies in the value of the input current I_Gi→Th, as described in previous studies by Terman et al. [6]. A stereoscopic image is used to plot the firing trajectory of the TC relay neuron. Figure 2(f) and (g) show the stereoscopic images of spiking and bursting trajectories of the original and CETC models in the three-dimensional phase space (V, h, ω). When the firing mode of the TC neuron is regular spiking, the trajectory of the neuron is a limit cycle. When the TC neuron bursts, the bursting trajectory slides along the bold half-parabola following the locus of stable equilibria, with the variable ω slowly decreasing. The dynamics of the CETC and original models are consistent, as shown in Fig. 2. The dynamical responses of the CETC neuron model under external stimulation are shown in Fig. 3. TC neurons cannot fire spontaneously, and Fig. 3(a) shows that they respond to positive depolarizing currents with continuous spikes, with larger applied currents yielding faster responses. Figure 3(b) demonstrates that TC neurons fire strong rebound bursts following release from sustained negative hyperpolarizing currents, with stronger rebound occurring for larger hyperpolarizing inputs. The TC neuron faithfully follows periodic external stimuli over a wide range of input amplitudes and frequencies. The results reveal that the dynamical response of the CETC model is the same as that of the original model. The original three-dimensional model using the conventional LUT-based method was introduced by Yang et al. [34]. The resource cost can be significantly reduced thanks to the piecewise linear approximation approach and by replacing multipliers with barrel shifters and adders (see Supplementary Table S3). A barrel shifter is a digital circuit that can shift a data word by a specified number of bits using only pure combinational logic instead of any sequential logic. Results in Figs 2 and 3 show that the CETC model both maintains the biological dynamics and reduces the hardware resource cost, which enables the RTDE system implementation and broadens applications in neuromorphic engineering.
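The two hardware tricks mentioned above can be illustrated in software: a piecewise-linear table replaces the sigmoid, and a constant multiplication is decomposed into shifts and adds, which is exactly what a barrel shifter plus adders compute. The breakpoints, coefficient and error measures below are assumptions for illustration:

```python
# Sketch: piecewise-linear (PWL) approximation of a sigmoid, plus the
# shift-add trick that replaces a constant multiplier with a barrel shifter
# and adders. Breakpoints and the error measures are illustrative assumptions.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# PWL approximation: linear interpolation between a few breakpoints
bp = np.linspace(-8.0, 8.0, 9)            # assumed breakpoints
def sigmoid_pwl(v):
    return np.interp(v, bp, sigmoid(bp))

v = np.linspace(-8.0, 8.0, 1000)          # M = 1000 sample points
e = sigmoid_pwl(v) - sigmoid(v)
mse = np.mean(e**2)                        # ERR_CF-style measure (assumed MSE)
mae = np.mean(np.abs(e))                   # mean absolute error
print(f"MSE = {mse:.6f}, MAE = {mae:.6f}")

# Shift-add constant multiplication: 0.75*x = (x >> 1) + (x >> 2)
x = 1024                                   # fixed-point sample
assert (x >> 1) + (x >> 2) == int(0.75 * x)
```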
The application of the UKF to the CETC neuron model. In order to predict the hidden properties of the TC relay neuron, the UKF algorithm must be implemented around the TC neuron model. The TC neuron model is not only used to generate the observations for testing the proposed system; it is also implemented within the RTDE system to estimate the hidden properties. The procedure of the RTDE system and the task of each module are described as follows.

The prediction module of the UKF. For an N-dimensional estimated state x, we choose the sigma points from the mean value x̂ and covariance P_xx as

X_i = x̂ + (√(N P_xx))_i,   X_{N+i} = x̂ − (√(N P_xx))_i,   i = 1, …, N,

where P_xx is the estimated covariance matrix and (√(N P_xx))_i denotes the i-th column of the matrix square root of N P_xx. Sigma points are sample points on the boundary of a covariance ellipsoid. This procedure is implemented in the prediction module of the UKF in the proposed RTDE system.

Transformed-points acquirement module. The function G is applied to the sigma points, giving X̃_i = G(X_i), i = 1, 2, …, 2N. The observation of the new state is represented by Ỹ_i = M(X̃_i). In the RTDE system, the nonlinear functions G(X) and M(X) use the CETC neuron model to yield the transformed points and the observations, which are implemented in the transformed-points acquirement module. To implement the UKF around the neuron model, the augmented state vector x is an N = p + n dimensional vector composed of p parameters and n dynamic variables. In this paper, in order to estimate the synaptic current from GPi neurons to TC neurons, the external applied current I_ext is treated as a time-varying parameter and inserted into the state vector; it consists of the synaptic current I_Gi→Th and the sensorimotor input I_SM. Thus, the process equations of the Kalman filter are

x_{k+1} = G(x_k) + w_k,   x = [I_ext, V, h, ω]^T,

with process noise w_k. Since the only measured variable is the membrane potential V of the TC relay neuron, the measurement equation is

y_k = C x_k + v_k,

where C = [0 1 0 0] and v_k is the observation noise. By augmenting the observed state variables with the system parameters and unobserved state variables, the UKF can track and estimate both the system parameter and the unobserved variables.

The updating module of the UKF. The updating module of the UKF in the RTDE system assimilates noisy measurements to update the system state and covariance. The mean values are defined as

x̃ = (1/2N) Σ_{i=1}^{2N} X̃_i,   ỹ = (1/2N) Σ_{i=1}^{2N} Ỹ_i,

which are the a priori state estimate and the a priori measurement estimate, respectively. The a priori covariance of the ensemble members is defined as

P_xx = (1/2N) Σ_{i=1}^{2N} (X̃_i − x̃)(X̃_i − x̃)^T + Q,

together with the measurement covariance P_yy (with R added) and the cross-covariance P_xy, defined analogously. The Kalman gain K = P_xy P_yy^{−1} then updates the state as x = x̃ + K(y − ỹ) and the covariance as P_xx ← P_xx − K P_yy K^T. The updated x and P_xx are used for the next iteration.

Estimation results of the RTDE system. In the proposed study, we estimated the thalamocortical hidden properties within the UKF framework by assimilating the membrane potentials. The RTDE system uses the FPGA-based UKF to reconstruct the hidden properties of the TC relay neuron from the measured membrane potentials of a digital neuron implemented with the CETC neuron model. Figure 4 shows the estimation results of the dynamical behaviors of the TC neuron under the normal and Parkinsonian states, respectively. The observation signal is the noise-contaminated membrane potential used as the observation of the RTDE system. The UKF dynamical tracking strategy using the CETC model provides credible results. The proposed RTDE system is implemented on a Stratix-III EP3SE260 FPGA, which provides a total of 768 18-bit DSP block elements. A timing requirement needs to be translated into static timing constraints for an FPGA to be able to handle it.
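A compact software sketch of one cycle of the augmented-state UKF described above (equal-weight 2N sigma points, membrane potential as the only measurement). The propagation function reuses the tc_rhs sketch given earlier; Q, R and the time step are placeholder values, not the published tuning:

```python
# Hedged sketch of one UKF iteration on the augmented state
# x = [I_ext, V, h, w]; only V is measured (C = [0 1 0 0]). The propagation
# function wraps the tc_rhs sketch from the model example above; Q, R and
# the initial covariance are placeholder values.
import numpy as np

N = 4
C = np.array([0.0, 1.0, 0.0, 0.0])
Q = 5e-5 * np.eye(N)        # process noise covariance (placeholder)
R = 5.0                     # observation noise variance (placeholder)

def step(x, dt=0.05):
    """One-step propagation G(x): Euler step of the TC model (assumed)."""
    I_ext, V, h, w = x
    dV, dh, dw = tc_rhs(V, h, w, I_ext)   # from the model sketch above
    return np.array([I_ext, V + dt * dV, h + dt * dh, w + dt * dw])

def ukf_update(x_hat, Pxx, y):
    # Sigma points: x_hat +/- columns of the square root of N * Pxx
    S = np.linalg.cholesky(N * Pxx)
    X = np.concatenate([x_hat + S.T, x_hat - S.T])       # 2N sigma points
    Xt = np.array([step(xi) for xi in X])                # transformed points
    Yt = Xt @ C                                          # predicted measurements
    x_pred, y_pred = Xt.mean(axis=0), Yt.mean()          # a priori estimates
    dX, dY = Xt - x_pred, Yt - y_pred
    Pxx_p = dX.T @ dX / (2 * N) + Q                      # a priori covariance
    Pyy = dY @ dY / (2 * N) + R                          # measurement covariance
    Pxy = dX.T @ dY / (2 * N)                            # cross-covariance
    K = Pxy / Pyy                                        # Kalman gain
    x_new = x_pred + K * (y - y_pred)                    # assimilate measurement
    Pxx_new = Pxx_p - np.outer(K, K) * Pyy
    return x_new, Pxx_new

# Usage: start from a rough guess and feed in noisy membrane potentials y_k
x_hat, Pxx = np.array([0.0, -65.0, 0.4, 0.1]), 0.1 * np.eye(N)
```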
Thus, timing constraints are applied to the hardware design and the clock period on the FPGA is set using a phase-locked loop (PLL). A PLL is a feedback control system that automatically adjusts the phase of a locally generated signal to match the phase of an input signal. The clock division factor of the PLL, which defines the ratio between the input clock frequency and the output clock frequency of the PLL, is set to 20 in the proposed RTDE system for an appropriate system speed. The original model would require a more than 50% increase in 18-bit DSP blocks, making it impractical for use in the UKF-based RTDE system. With the proposed CETC model, the UKF system can be implemented on the Stratix-III EP3SE260 FPGA with low resource cost. An analysis of the experimental results is given to evaluate the estimation performance of the UKF. The estimation results are displayed using stereoscopic images of spiking trajectories under both normal and Parkinsonian states. In Fig. 4(c1) and (c2), the true and estimated values are plotted using black and red lines, respectively. The absolute error e_i is calculated to investigate the estimation performance for the three TC neuron model variables. The values of e_i under the normal and Parkinsonian states are depicted in Fig. 4(d1) and (d2), respectively. The error analysis reveals that high-performance estimation is obtained using the proposed RTDE system, suggesting further application in electrophysiological experiments.

Performance analysis of the RTDE system. In order to explore the effects of the process and observation noise covariance matrices Q and R on the estimation results and to evaluate the estimation error of the algorithm, we use the RMSE cost function, defined as

CF_rmse = √( (1/n) Σ_{i=1}^{n} (x_est,i − x_tru,i)² ),

where n is the number of estimated points, and x_est,i and x_tru,i stand for the estimated (est) and true (tru) values, respectively. The boxplots in Fig. 5 show the points, L-estimators, interquartile range, midhinge, range, mid-range and trimean. The estimation error of the membrane potentials increases with increasing observation noise R, or with decreasing process noise Q, Fig. 5(a1) and (b1). The noise has a significant effect on the estimation error of the variable h, Fig. 5(a2) and (b2), similar to its effect on the variable V. Figure 5(a3) and (b3) show that the observation noise R does not have a large effect on the estimation error of the slow variable ω. The time to reach steady state becomes longer as Q decreases or R increases, so there is a trade-off between the estimation error and the time to steady state. In the proposed design, we choose the UKF parameters Q = 0.00005 and R = 5. In order to investigate the estimation performance of the proposed RTDE system, we added different kinds of noise to the model data to mimic realistic measurement environments. As shown in Fig. 6(a), CF_rmse for a noiseless observation is close to zero. We then added noise to mimic measurements of the membrane potentials of TC relay cells [20,41-44]. In Fig. 6(a), different kinds of noise are added to the observation: Gaussian white noise, industrial frequency noise (50 Hz), odd harmonic noise (150 Hz), high-frequency noise (350 Hz) and mixed noise. The mixed noise is a mixture of the Gaussian white noise, industrial frequency noise, odd harmonic noise and high-frequency noise.
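The RMSE cost function is straightforward to compute; the sketch below also builds a mixed-noise observation of the kind used in the robustness tests (the noise amplitudes and the stand-in trace are assumed):

```python
# CF_rmse between estimated and true traces, plus a mixed-noise observation
# of the kind described in the robustness tests. Amplitudes are assumed.
import numpy as np

def cf_rmse(x_est, x_tru):
    x_est, x_tru = np.asarray(x_est), np.asarray(x_tru)
    return np.sqrt(np.mean((x_est - x_tru) ** 2))

fs, T = 10_000.0, 1.0                      # sampling rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(2)
v_true = -65.0 + 5.0 * np.sin(2 * np.pi * 4.0 * t)   # stand-in trace
noise = (rng.normal(0.0, 0.5, t.size)                # Gaussian white
         + 0.5 * np.sin(2 * np.pi * 50.0 * t)        # industrial frequency
         + 0.2 * np.sin(2 * np.pi * 150.0 * t)       # odd harmonic
         + 0.1 * np.sin(2 * np.pi * 350.0 * t))      # high frequency
v_obs = v_true + noise
print(f"CF_rmse = {cf_rmse(v_obs, v_true):.3f}")
```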
When the observation is noise-corrupted, the reconstructed hidden variables still approximate their true values well. We also investigated the effects of noise strength on the estimation performance for the membrane potentials, as shown in Fig. 6(b). Figure 6(c) shows the effects of noise strength on the estimation error for the thalamocortical hidden properties. Both the normal and Parkinsonian states were considered. The effects of Gaussian white noise, high-frequency noise and odd harmonic noise on the estimation performance are limited, as reflected by a small value of CF_rmse. However, the estimation error under industrial frequency noise increases significantly with increasing noise strength. The error induced by the mixed noise is larger than the errors under the other four kinds of noise and increases more sharply with noise strength. The industrial frequency noise is the major contributor to the greater estimation error under the mixed noise in both the normal and Parkinsonian states. Fortunately, a notch filter can be used in practical applications to remove the industrial frequency noise. Thus, the errors of the estimation results for both the membrane potential and the hidden properties can remain small even at high noise strength, which suggests a good estimation performance of the proposed system in complex measurement environments such as electrophysiological experiments. Moreover, another main difference between model-based data and real data is the uncertainty of the unknown parameters in the real data. In order to explore the effectiveness of the UKF for a realistic system, a set of real data would be of great importance, and this parameter uncertainty may increase the difficulty of dynamical estimation on real data. To investigate the estimation performance on such data, a double-blind experiment was introduced in our work: the implementation of the UKF-based system does not depend on any parameters of the TC neurons, and the voltage series of the TC neurons is generated independently of any parameters of the RTDE system. In the double-blind experiment, the parameters used in the model within the RTDE system differ from those of the observation, and all the parameters of the observation are unknown. Since in a dynamic clamp experiment the current injected into a neuron is known, we can apply an external current to replace the sum of the synaptic current I_Gi→Th and the sensorimotor input current I_SM, depicted by the solid red lines in Fig. 7(b) and (d). The sensorimotor input current I_SM takes the form of a series of monophasic current pulses with an amplitude of 5 pA/μm² and a duration of 5 ms. The instantaneous frequency of the input current follows a gamma distribution with an average rate of 30 Hz and a coefficient of variation of 0.2, so that I_SM mimics the irregular nature of the input current from the cortex to the thalamus [45]. Choosing R = 1, Q = 0.005 for the normal state and R = 0.5, Q = 0.5 for the Parkinsonian state, the estimation works well, as shown in Fig. 7(a) and (c). It is worth noting that only the TC membrane potentials are observable in the proposed double-blind study.
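The irregular sensorimotor drive described above can be generated by drawing inter-pulse intervals from a gamma distribution; with a coefficient of variation of 0.2 the shape parameter is k = 1/0.2² = 25, and the scale is set so that the mean interval is 1/30 s. A sketch with the quoted pulse amplitude and duration:

```python
# Sketch: monophasic current pulse train with gamma-distributed inter-pulse
# intervals (mean rate 30 Hz, CV 0.2 => shape k = 25). Pulse amplitude and
# duration follow the values quoted in the text.
import numpy as np

rate, cv = 30.0, 0.2
k = 1.0 / cv**2                   # gamma shape: CV = 1/sqrt(k)
scale = (1.0 / rate) / k          # gamma scale so the mean ISI is 1/rate
rng = np.random.default_rng(3)

isis = rng.gamma(k, scale, size=200)          # inter-pulse intervals (s)
pulse_times = np.cumsum(isis)                 # pulse onset times (s)

def i_sm(t, amp=5.0, dur=5e-3):
    """5 pA/um^2 pulses of 5 ms duration at the gamma-distributed onsets."""
    active = np.any((pulse_times <= t) & (t < pulse_times + dur))
    return amp if active else 0.0

print(f"empirical rate = {1.0 / isis.mean():.1f} Hz, "
      f"CV = {isis.std() / isis.mean():.2f}")
```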
In order to evaluate the estimation performance, the injecting currents are also estimated and the estimation results are shown as dotted black lines in Fig. 7(b) and (d). We noticed that there are fluctuations in the reconstructed injecting currents, which is because the mapping functions for sigma points given by equation 10 may differ from those of a real TC cell. The fluctuations may result from the lack of a frequency-adaptation current in the model used in the design of the UKF. Since the focus of this study was to estimate the complex thalamocortical hidden properties using the simple model, we have ignored this feature in the model in this study. However, from the viewpoint of the estimation results of the hidden slow variable, although the model in the UKF is simple, the estimation still performs very well for observation data without known parameters. Thus, it can be concluded that the proposed UKF system is appropriate for the estimation of the underlying information from a real recording. Discussion The Parkinsonian state is characterized by the synchronized bursting phenomenon of the basal ganglia network. In this study, we propose a method along with techniques to obtain hidden properties of the TC relay neuron to investigate the mechanism of PD. To the best of our knowledge, real-time implementation of dynamical estimation for a TC relay neuron has rarely been reported until now. We tested the implementable hardware system using the reproduced biological firing patterns with noise from a digital TC neuron. Experimental results reveal that the proposed system can effectively estimate the hidden properties of the digital TC neuron with high precision. However, a limitation of the proposed work is the lack of validation using real data. Thus, in order to further explore the effectiveness of the RTDE system for a realistic neuron, a double-blind experiment is introduced in our work. It is shown that the estimation still performs well. In future work, we will focus on the applications of our RTDE system, especially in electrophysiological recordings. This result opens a pathway for the future design of neural dynamical estimation for the TC relay neuron by overcoming the challenges of high hardware cost, scalability and computational efficiency, which is meaningful for the investigation and neuromodulation of the pathological states of movement disorders. Besides, the proposed RTDE system is significantly beneficial for the performance enhancement and application extension of the conventional dynamic clamp system, which will be helpful for revealing new aspects of neural system dynamics. In the implementation of the RTDE system, the UKF algorithm is required to be implemented with the neuron model to obtain transformed points and observations. A critical challenge to overcome for the real-time estimation system lies in the implementation of the complicated TC model. Major limitations of the parallel model implementation include the number of available fast multipliers and the random access memory (RAM) resources on a single FPGA chip 33,34 . We propose a novel CETC neuron model to replace the nonlinear functions of the high-dimensional TC neuron model with relevant dynamics for high-performance digital implementation. The experimental results showed that the CETC model requires reduced hardware resources and accurately reproduces the thalamocortical dynamical characteristics in both the normal and Parkinsonian states.
Numerous studies have proposed FPGA-based implementations of realistic neural networks with different hardware structures and methods for the real-time emulation of large-scale networks [46][47][48][49] . The real-time emulation of large-scale neural networks is of vital significance for understanding how the brain transfers, decodes and processes information 50,51 . The CETC model, with its lower hardware overhead, is useful for establishing a large-scale thalamus network, which is meaningful for further investigations of the basal ganglia and related movement disorders. Although researchers have attempted to obtain better performance of control strategies using various filtering algorithms, successful applications are limited by the lack of sufficient computational capacity 22 . The ability to implement a system-on-a-chip control platform using the RTDE system in NCE studies is a key advantage of the proposed technique. A closed-loop control strategy for the TC relay neuron based on FPGA has also been implemented; however, it cannot be used in practical applications due to its limitations in estimating the hidden properties 34 . Using the proposed RTDE system, the hidden properties can be used in closed-loop hidden-variable-based (HVB) control to provide a significant enhancement of control performance in comparison with open-loop control and closed-loop membrane-potential-based (MPB) control (see Supplementary information C and Supplementary Fig. S2). Moreover, previous experiments with dynamic clamp techniques have focused on several kinds of diseases including Huntington's disease, epilepsy and Alzheimer's disease [37][38][39] . Some neuromodulation studies have also employed the dynamic clamp system to investigate the effects and mechanisms of neuromodulation on the neural system 19,40 . Thus, the proposed technique can be useful in explorations of other kinds of diseases by replacing the neurological model in the proposed framework, and can improve the effects of the neuromodulation approach. Another important application of the proposed estimation system is the decoding process in BMI projects 20,22,52 . The UKF algorithm of Li et al. 22 and the kernel autoregressive moving average algorithm of Shpigelman et al. 53 have applied non-linear models of neural tuning in closed-loop BMI, which paves the way for the application of nonlinear filtering algorithms in BMI applications. This work, which combines the estimation algorithm with the neuron model using real-time measurements from membrane signals, will be meaningful for the further development of BMI for thalamus decoding. An open problem in applying the proposed system to BMI projects is the design of algorithms that convert neural signals into control signals for the force-feedback device, which remains to be solved in future studies. In terms of future clinical applications, the proposed real-time platform is particularly applicable to TC-driven prosthetic devices due to its reliable performance. While our estimates contribute to explorations of the mechanism of PD, the technique and approach developed in this paper are also expected to be easily applicable to a wide variety of other diseases characterized by rapidly developing neurodegenerative dynamics, such as epilepsy or other kinds of neurological disorders. Methods Implementation of the UKF-based RTDE system.
Unlike previous studies in which only the decoding portion is designed on a FPGA, both decoding and encoding portions of the UKF algorithm are implemented in the proposed system, which facilitates the use of our system for online neural dynamical tracking or control using a portable device. The parallel system architecture is shown in Fig. 8(a). The "P_xx_init" and "x_init" modules initialize the covariance matrix P_xx and the mean value of the estimated state x. These two initialization modules work in the first step of the proposed system and then work as registers. The Cholesky decomposition algorithm (see Supplementary information B) is implemented in the "Cholesky Decomp." module. The "X_i_calc" module calculates the sigma points X_i, and the eight "digital TC neuron" modules are used to propagate each sigma point from time step t to t + 1, yielding the transformed points and the observations. The digital TC neuron uses the CETC neuron model to reduce the computational resources. The estimation of the new mean values based on the transformed points is calculated in the "x_calc" module, and the estimation of the new covariance is implemented in the "∼P_xx_calc", "∼P_xy_calc" and "∼P_yy_calc" modules. The "Parallel Mult." modules, containing 2N multiplication blocks in parallel, are used to compute the Kalman gain matrix K and the updated mean and covariance matrices. The "1/a" module is designed to calculate the reciprocal value of ∼P_yy. The "X_i_calc", "x_calc", "∼P_xx_calc", "∼P_xy_calc" and "∼P_yy_calc" modules are implemented based on the UKF algorithm described in equations 9, 12 and 13. The digital structure of the Cholesky algorithm is shown in Fig. 8(b), which is a parallel architecture feasible for FPGA-based implementation. In the "DIV" block for the division operation, the fixed-point divider can only output the quotient and remainder separately, so the numerator is enlarged by a barrel shifter before being divided by the denominator, and the quotient is then scaled back down by the same factor with a barrel shifter to obtain the division result. The fixed-point square root block can only output the integer square root, so the same approach is used in the "Sqrt" blocks. The "MUL" block represents the fixed-point multiplication operation and the "SUB" block stands for the fixed-point subtraction operation. Digital implementation of the TC neuron model. In the presented design, the Euler method is used to discretize the TC relay neuron model for simulations. Three blocks are used to compute the derivatives of the variables V, h, ω. The modified ordinary differential equation for V is discretized into an update of the forward-Euler form V_{k+1} = V_k + Δt·(dV/dt)|_k, and, similarly, the ordinary differential equations for the two gating variables h and ω are discretized as h_{k+1} = h_k + Δt·(dh/dt)|_k and ω_{k+1} = ω_k + Δt·(dω/dt)|_k, where k indexes the integration steps and Δt is the time step in the Euler-based discretized equations. The parameter C_m in equation 16 is set to 1 pF/μm² in this study. The synaptic current I_Gi→Th is determined based on the state of the neuron. The sensorimotor input I_SM takes the form of a square wave with a period of 25 ms, duty ratio of 20% and amplitude of 5 mV. Pipelining technology is a significant approach to increase the throughput of the hardware system. There are three variables contained in the digital pipeline of TC neurons, so the hardware topology of the TC neuron has three pipelines and three buffers as shown in Fig. 9, which should be synchronized with each other at each clock pulse.
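A minimal software sketch of the forward-Euler update scheme described above is given below. The right-hand-side functions are stand-ins, not the paper's TC equations, since the full ionic-current expressions are not reproduced here; only the update structure (all three variables advanced together with step Δt) follows the text.

```python
import numpy as np

# Placeholder right-hand sides; the actual TC ionic currents and gating
# kinetics from the paper's equations are not reproduced here.
def dv_dt(v, h, w, i_syn, i_sm, c_m=1.0):
    i_ion = 0.1 * v + h - w                        # stand-in ionic currents
    return (-i_ion - i_syn + i_sm) / c_m

def dh_dt(v, h):
    return (1.0 / (1.0 + np.exp(v)) - h) / 10.0    # stand-in gating kinetics

def dw_dt(v, w):
    return (np.tanh(v) - w) / 100.0                # stand-in slow variable

dt = 0.01                                          # time step (ms)
v, h, w = -65.0, 0.1, 0.0
for k in range(100000):
    # Square-wave sensorimotor input: 25 ms period, 20% duty ratio.
    i_sm = 5.0 if (k * dt) % 25.0 < 5.0 else 0.0
    v_new = v + dt * dv_dt(v, h, w, i_syn=0.0, i_sm=i_sm)
    h_new = h + dt * dh_dt(v, h)
    w_new = w + dt * dw_dt(v, w)
    v, h, w = v_new, h_new, w_new   # advance all three variables together
```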
In Fig. 9(a) the "V" pipeline module calculates the pipeline of variable V in V_p stages, which affects the computational cost of the digital implementation. The "h" and "ω" pipeline modules implement the pipelines of the variables h and ω in h_p and ω_p stages respectively. The V_Buf, h_Buf and ω_Buf store the variable values in the three pipelines separately. Figure 9(b), (c) and (d) show the detailed digital structure of the "V", "h" and "ω" pipeline modules respectively. The fi_block (i = 1, 2, … , 8) implements the linearized functions in the CETC model. The "multiplication" operations with constant parameters are replaced by "add" and "shift" operations. Multiplication between two variables cannot be replaced by "add" and "shift" operations, so the implementation of the proposed model must still include some multipliers. The detailed structure of the piecewise linear function block is shown in Fig. 9(f), which employs a set of logic elements. The proposed method reduces the total block memory bits significantly. The number n in Fig. 9(f) is the segment number of the piecewise linear functions. The bus builder block is used to construct the output from inputs with a single bit. The module of the sensorimotor input current is implemented based on FPGA as well, to reproduce the cortical excitatory input current from the brain sensorimotor area, and its detailed hardware structure is shown in Fig. 9(g). It is implemented using the digital logic elements of the FPGA based on equation 2.
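The following sketch illustrates the piecewise-linear idea behind the fi_blocks: a nonlinear function is approximated by n linear segments and evaluated by interpolation between breakpoints. The breakpoints and the sigmoid target are illustrative assumptions; in actual hardware, the constant slopes would be chosen near powers of two so that the per-segment multiply reduces to shift-and-add operations.

```python
import numpy as np

# Breakpoints chosen ad hoc; a real design would fit them to the target curve.
xs = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])     # n = 4 linear segments
ys = 1.0 / (1.0 + np.exp(-xs))                 # target values at the breaks

def pwl(x):
    """Evaluate the piecewise-linear approximation (clamped at the ends)."""
    return np.interp(x, xs, ys)

x = np.linspace(-8.0, 8.0, 101)
exact = 1.0 / (1.0 + np.exp(-x))
print("max abs error on samples:", np.max(np.abs(pwl(x) - exact)))
```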
Irrigation of Castor Bean (Ricinus communis L.) and Sunflower (Helianthus annuus L.) Plant Species with Municipal Wastewater Effluent: Impacts on Soil Properties and Seed Yield The effects of plant species (castor bean (Ricinus communis L.) versus sunflower (Helianthus annuus L.)) and irrigation regime (freshwater versus secondary treated municipal wastewater) on soil properties and on seed and biodiesel yield were studied in a three year pot trial. Plant species were irrigated at rates according to their water requirements with either freshwater or wastewater effluent. Pots irrigated with freshwater received commercial fertilizer, containing N, P, and K, applied at the beginning of each irrigation period. The results obtained in this study showed that irrigation with effluent did not result in significant changes in soil pH, soil organic matter (SOM), total Kjeldahl nitrogen (TKN), and dehydrogenase activity, whereas soil available P was found to increase in the upper soil layer. Soil salinity varied slightly throughout the experiment in effluent irrigated pots, but no change was detected at the end of the experiment compared to the initial value, suggesting sufficient salt leaching. Pots irrigated with effluent had higher soil salinity, P, and dehydrogenase activity but lower SOM and TKN than freshwater irrigated pots. Sunflower showed greater SOM and TKN values than castor bean, suggesting differences between plant species in the microorganisms carrying out C and N mineralization in the soil. Plant species irrigated with freshwater achieved higher seed yield compared to those irrigated with effluent, probably reflecting the lower level of soil salinity in freshwater irrigated pots. Castor bean achieved greater seed yield than sunflower. Biodiesel production followed the pattern of seed yield. The findings of this study suggest that wastewater effluent can constitute an important source of irrigation water and nutrients for bioenergy crop cultivations with minor adverse impacts on soil properties and seed yield. Plant species play an important role with regard to the changes in soil properties and to the related factors of seed and biodiesel yields. Introduction There is a growing interest worldwide with regard to the use of renewable energy sources as a means to both reduce the environmental pollution associated with energy produced by fossil fuels and to tackle the problem of the depletion of these fuels. At present, in the European Union (EU) renewable energy sources account for only 4.5% of the total energy consumption; however, a target of 20% has been set for 2020. In addition, it is hoped to increase biofuel consumption to 10% of the fuels used in road transportation by 2010 [1]. The increase in biofuel production is expected to offer environmental and economic benefits to European communities by promoting rural employment and incomes, reducing greenhouse gas (GHG) emissions, and by improving energy supply and security. The most common biofuels are biodiesel and bioethanol, mainly produced from biomass or renewable energy sources, with biodiesel representing 82% of the total biofuel production in the EU [2].
Currently, the major biodiesel feedstocks are edible-grade vegetable oils originating mainly from soybean, rapeseed, sunflower, mustard, and palm. A major drawback of these bioenergy crops is the fact that the intensive cultivation and large scale production of vegetable oils for biodiesel production leads to significant increases in oil prices and an imbalance in the food market, particularly in developing countries. Moreover, these crops may compete with traditional food crops for land and available water. In order to mitigate these problems, unconventional bioenergy crops and the oilseeds they produce are being investigated as alternative feedstocks. Castor bean (Ricinus communis L.) is considered to be one of the most promising non-edible oil crops due to its high potential for annual seed production and its tolerance to diverse environmental conditions. In addition, castor bean can be grown on marginal lands which are usually unsuitable for food crops [3][4][5]. The establishment of bioenergy crops in marginal or degraded lands may offer additional environmental benefits, such as protection from soil erosion and nutrient leaching, and improvement of soil properties [6]. Of particular interest is the use of bioenergy crops as a vegetative filter to purify wastewater effluents applied to the soil. This practice is also known as land treatment systems (LTS) or slow rate systems (SRS) and meets both environmental and renewable bioenergy goals [7,8]. Effluent can supply bioenergy crops with considerable amounts of water and nutrients which stimulate plant growth and yield. In addition, effluent application can reduce the competition between bioenergy crops and traditional crops with respect to the use of fresh water, and it can also decrease production cost due to the substitution of water and fertilizers [9]. The irrigation of bioenergy crops with wastewater effluent gives rise to serious environmental concerns which need to be addressed. The prime concern regards the release of nutrients to the environment, particularly nitrogen (N) and phosphorus (P), and their potential to adversely affect the quality of water resources and the climate due to the accumulation of nitrates/phosphates in the soil and the increase in greenhouse gas emissions (e.g., N2O) [10][11][12]. Crops play a defining role in N cycling with effects on the effluent application rates and the amount of N entering the soil, the proportion of N recovered, and the microbial communities mediating N turnover in the soil [13][14][15]. With regard to P, however, the effect of crops is less strong because of the catalytic role of soil in the sorption and transfer of P and the relatively low potential of crops for P assimilation in plant biomass [15,16]. Recent work has shown that the selection of crops with high water use efficiency (WUE), as determined by low biomass potential, and the adoption of appropriate management practices may reduce N and P release to the environment in sites irrigated with relatively strong effluent [15].
Wastewater effluent may result in significant changes in soil properties, thus influencing the overall functioning of the soil [16]. Increased levels of soil salinity and sodicity and adverse changes in physical properties have been reported in sites under effluent application, to a degree dependent on the quality of the effluent, irrigation rates, and soil properties [7,17,18]. Soil microorganisms and enzyme activities have been found to respond rapidly to these changes [19,20]. Enhanced activities are often attributed to the organic matter and nutrients from wastewater effluent, whereas the adverse effects are associated with the presence of harmful constituents in the effluent, such as heavy metals or polycyclic aromatic hydrocarbons. Among enzymes, dehydrogenase, which is an important intracellular enzyme in metabolic reactions in living soil microbes, is considered to be an appropriate indicator of microbial activity in soils treated with effluent or other polluted waters [20]. The effects of plant species (Ricinus communis L. versus Helianthus annuus L.) and irrigation regime (freshwater versus secondary treated municipal wastewater) on soil properties and on seed and biodiesel yields were studied in a three year pot trial. Knowledge provided by this study is expected to help in the better understanding of the soil and yield response of bioenergy crops grown under different irrigation regimes, particularly those irrigated with wastewater effluent, leading to suitable plant species and better irrigation water/wastewater effluent management, ensuring sustainable yield with minor adverse environmental impacts. Materials and Methods The study was carried out in an experimental field of NAGREF in Iraklion, Crete, Greece. The climate is semi-arid with relatively humid winters and dry, warm summers. Meteorological data (temperature and rainfall) were obtained from a station next to the experimental field and are shown in Figure 1. In May 2007, seeds of castor bean (Ricinus communis) and sunflower (Helianthus annuus) were planted in large pots (40 cm in diameter × 50 cm in height). Previously, the pots were filled with top-soil from an adjacent fallow field and were placed in a grid at 1 m intervals. The soil was characterized as clay-loamy (CL) (sand 39%, loam 28%, and clay 33%) and the chemical properties were pH: 7.81, electrical conductivity (EC) (dS/m): 0.40, soil organic matter (SOM) (%): 2.40, total Kjeldahl nitrogen (TKN) (%): 0.19, Olsen-P (ppm): 24.00. Plant species were irrigated with secondary treated wastewater effluent or freshwater and were arranged in a randomized complete block design with three factors (plant species (2), water quality (2), and soil depth (3)) and seven replicates. Pots were located at a distance of 1 m from each other. The experiment was conducted for three consecutive growing seasons (2007-2009), and at the end of each season the plants were removed from the pots and replaced with new seeds at the beginning of the next season. The effluent used in this study was collected from the biological treatment unit of the city of Iraklion on a weekly basis and was transferred in a plastic tank to the experimental field, where it was discharged and stored in a 500 L tank. The freshwater used was supplied from the irrigation system of NAGREF. The average composition of the effluent and freshwater applied to the pots is shown in Table 1. The application rates of the effluent and freshwater were based on crop water requirements estimated separately for each plant species
using tensiometer readings from a depth of 30 cm. The soil water potential was never allowed to fall below −40 kPa so that irrigated crops did not experience water stress. The hydraulic loading rates applied in this study were higher than the reference evapotranspiration of the area [15] due to higher soil temperature, since the pots were exposed to the sun. The application of effluent and freshwater was carried out manually with the use of volumetric flasks so that equal quantities of liquid were applied to all pots in every wet-dry cycle. The differentiation of the hydraulic load among plant species is due to variations in irrigation frequency resulting from different growth patterns and water needs. The method of application was thoroughly tested with freshwater during its establishment and at the beginning of every growing season to validate that flow was uniform within pots. The pots treated with freshwater also received commercial fertilizer, containing N, P, and K, applied at the beginning of each irrigation period. The planning of fertilization mainly aimed to achieve an equal amount of added nitrogen between effluent and freshwater pots. The effluent and freshwater loads as well as the amounts of N, P and COD applied to each plant species at the end of the three growing seasons are given in Table 2. Samples of effluent and fresh water were collected from the outlet of the storage tanks on a weekly basis, transferred to the laboratory, and analyzed for pH, EC, TKN, total phosphorus (TP), and COD. Sample preparation and the relevant analyses were done according to Standard Methods for the Examination of Water and Wastewater [21]. In addition, soil samples from 0-10, 10-20, and 20-30 cm depths were collected at the beginning and at the end of each irrigation period from each pot with the aid of a soil sampler. All samples were air-dried at room temperature for no less than 4 weeks each time. The dried samples were then ground to dust, sifted through 1 mm sieves and analyzed for pH, EC, TKN, Olsen-P, and SOM according to methods of soil analysis [22]. The particle size analysis of the soil samples was carried out by the Bouyoucos hydrometer method. Measurements of pH and EC were carried out in saturation paste extracts. The Walkley and Black wet digestion method was used for the determination of organic matter (SOM) in soil samples. Available P was assessed after extraction with NaHCO3 according to the Olsen method. Total Kjeldahl nitrogen (TKN) was assessed with a micro-Kjeldahl device. Soil samples for the examination of dehydrogenase activity were immediately stored at 4 °C. The dehydrogenase activities were measured by a slightly modified method of [23] within two weeks after soil collection, as follows. Fresh soil (2.5 g) was added to a test tube (18 mm × 150 mm) and mixed with 2.5 mL of 1% TTC (2,3,5-triphenyl tetrazolium chloride)-Tris buffer (pH 7.6). The tubes were incubated at 37 °C in the dark for 24 h. The triphenylformazan (TPF) formed by the reduction of TTC was extracted with 50 mL methanol and measured with a spectrophotometer at 485 nm. Methanol was used as a blank.
In this study, growth, seed yield, oil content, and biodiesel production were estimated so as to compare the different water qualities and crops. Therefore, all plants were cut down every year at the end of each irrigation period following the collection of the seeds. Height was measured from the soil level to the last part of the plant, which was the back of the head for sunflower and the last blossom for castor bean. The seeds from each plant were collected every year at the end of the irrigation period or at the time when they were ready for harvesting; the sunflower seeds were harvested before the end of the irrigation period since they reached maturity before the castor bean plants. After collection, seeds were air dried for two weeks at room temperature and then weighed. For the calculation of seed yield in kg per ha, a spacing of 40 × 40 cm was considered, corresponding to 62,500 plants per ha, which is normal for castor or sunflower plantations [24,25] (see the sketch after this subsection). Samples of seeds were sent to the lab of the Technical University of Crete for oil and biodiesel extraction. Assessment of biodiesel yield was based on the amount of seeds yielded, the quantity of oil that was extracted from the seeds, and the percentage of biodiesel reclamation from the oil. For the oil extraction process a 1:6 m/v (mass per volume) seed to solvent ratio was used [26]. The synthesis of biodiesel was carried out by the homogeneous base transesterification method using sodium hydroxide as a catalyst. Statistical analysis was performed using the SPSS 17.0 program. Analysis of variance (ANOVA) was used for data analysis. Post hoc pairwise comparisons among plant species, water qualities or soil depths were examined by Tukey's honestly significant difference (HSD) test. Impacts on Soil Properties Irrigation with freshwater and effluent tended to increase soil pH in all soil layers over the first and second irrigation periods, but pH then decreased slightly in the upper soil layers towards the end of the experiment. In contrast, soil pH in the deeper soil layer continued to increase during the last period and, as a result of this trend, a significant effect was detected in that period, with the deepest soil layer showing higher pH values compared to those of the upper ones. Pots irrigated with effluent were found to have slightly higher soil pH compared to those irrigated with freshwater, an effect which appeared at the beginning of the second irrigation period and peaked at the end of that period. Thereafter, the differences in soil pH between effluent and freshwater treated pots decreased with time. Furthermore, plant species had no significant effect on soil pH during the experimental period. With regard to soil EC, it increased slightly during the irrigation periods and decreased during the winter. Thus, no change was detected at the end of the experiment compared to the initial value, suggesting sufficient salt leaching. Pots irrigated with effluent had higher EC compared to those irrigated with freshwater throughout the irrigation periods. Plant species and soil depth had no effect on the EC of the soil.
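As a worked illustration of the yield calculation described in the Methods, the sketch below scales per-plant seed mass to a per-hectare basis using the 40 × 40 cm spacing (62,500 plants/ha) and chains it with the oil and biodiesel-reclamation fractions. The numeric inputs are hypothetical, not the measured values of Table 3.

```python
# Plant spacing of 40 cm x 40 cm gives 0.16 m^2 per plant:
# 10,000 m^2 per ha / 0.16 m^2 = 62,500 plants per ha, as used in the paper.
SPACING_M = 0.40
plants_per_ha = 10_000 / (SPACING_M * SPACING_M)   # = 62,500

def seed_yield_kg_ha(seed_g_per_plant):
    """Scale per-plant seed mass (g) to kg per hectare."""
    return seed_g_per_plant * plants_per_ha / 1000.0

def biodiesel_kg_ha(seed_g_per_plant, oil_frac, reclaim_frac):
    """Biodiesel yield = seed yield x oil content x biodiesel reclamation."""
    return seed_yield_kg_ha(seed_g_per_plant) * oil_frac * reclaim_frac

# Illustrative numbers only (not the measured values from Table 3):
print(f"plants/ha  = {plants_per_ha:.0f}")
print(f"seed yield = {seed_yield_kg_ha(60.0):.0f} kg/ha")            # 60 g/plant
print(f"biodiesel  = {biodiesel_kg_ha(60.0, 0.48, 0.90):.0f} kg/ha")
```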
SOM varied slightly during the experimental period and no change from the initial value was detected at the end of the experiment. In pots irrigated with freshwater the SOM increased slightly during the irrigation periods but returned to background levels after each winter. With regard to effluent irrigated pots, SOM remained almost constant during the experimental period. As a result of these trends, SOM was marginally higher in pots irrigated with freshwater compared to pots irrigated with effluent at the end of the irrigation periods, but this effect tended to decline after the winter (Figure 2a). As with water quality, plant species significantly affected SOM, with sunflower showing greater values than castor bean during the second and third irrigation periods (Figure 2a). No differences in SOM were detected among different soil layers. With the exception of the second irrigation period, soil TKN decreased slightly in all pots during the experimental period. It was marginally higher in pots irrigated with freshwater than in effluent irrigated pots (Figure 2b). This effect was evident mainly in pots planted with sunflower and it became stronger towards the end of the second irrigation period (Figure 2b). Soil depth had no significant effect on soil TKN, while among plant species sunflower showed higher TKN compared to castor bean. Pots irrigated with effluent showed only a slight variation in soil P levels, the final amounts being close to background levels (Figure 3a and b). However, soil P followed a different pattern in pots irrigated with freshwater, a significant decrease being detected by the beginning of the second period. Thereafter no significant variation of soil P was observed until the end of the experimental period. As a result of these trends, soil P was higher in the pots irrigated with effluent than in the pots irrigated with fresh water from the beginning of the second irrigation period to the end of the experiment. No differences in soil P were observed among soil layers until the beginning of the second period. Thereafter it varied greatly with soil depth in all pots, the upper soil layer showing higher soil P than the deeper layers. An increase in soil P was observed in the upper soil layer in effluent irrigated pots, as indicated in Figure 3b. With regard to plant species, no significant differences in soil P were detected (Figure 3a). Dehydrogenase activity in the soil at the end of the three consecutive irrigation periods was slightly lower than that measured at the beginning of the experiment. Soil irrigated with freshwater tended to have lower dehydrogenase activity compared to soil irrigated with effluent (Figure 4a), indicating differences in microbial biomass and activities between pots under the two irrigation regimes. Soil depth had a significant effect on dehydrogenase activity, with the deepest soil layer showing a lower value than the upper soil layers (Figure 4b). As far as plant species are concerned, no difference in dehydrogenase activity was detected at the end of the experimental period.
Plant Height, Seed Yield, and Biodiesel Production Water quality had no effect on the height of sunflower throughout the experimental period. In contrast, castor bean displayed sensitivity to the irrigation regime, with taller plants found in freshwater irrigated pots than in pots irrigated with effluent (Figure 5). Plant species irrigated with freshwater achieved higher seed yield compared to species irrigated with effluent. With regard to differences between plant species, castor bean showed higher seed yield than sunflower (Table 3). The oil produced followed the same pattern as that of seed yield. The highest percentage of reclaimed biodiesel was observed in effluent irrigated castor bean, followed by freshwater irrigated castor bean, freshwater irrigated sunflower, and effluent irrigated sunflower (Table 3). As a result of the seed yields obtained and the conversion percentage of oil, the production of biodiesel varied greatly among different treatments, with freshwater irrigated pots showing higher biodiesel yields compared to effluent irrigated pots. In addition, the amount of biodiesel produced by castor bean was greater than that produced by sunflower. Discussion A slight increase in soil pH, mainly in the deeper soil layer (20-30 cm), was detected only at the end of the third experimental period. Furthermore, effluent irrigated pots showed higher pH levels than pots irrigated with freshwater. These differences may be explained by variations in the levels of carbonates and bicarbonates of several cations, such as Ca2+ and Na+, which have probably accumulated in the soil [27]. Plant species had no effect on soil pH; this fact, together with the minor effects of soil depth and irrigation regime, reflects the strong buffering capacity of the fine textured soil in this study. Irrigation with water of relatively high salt content can increase the level of soil EC because of the accumulation of salts in the soil. Increased levels of soil salinity may adversely affect soil properties and reduce plant growth due to declining soil osmotic potential and lower water availability [17,18]. In this study, an increase in soil EC was observed during the irrigation periods, which was followed by a decrease to background levels after the winter, suggesting sufficient salt leaching. Pots irrigated with effluent were observed to have higher soil EC than those irrigated with freshwater due to the higher salt content of the effluent. With regard to plant species, the absence of differences in soil EC might be expected due to similar application rates. Previous work has shown that the level of soil EC follows the pattern of the application rate, since plant species can assimilate only a proportion of the available salts [15,16].
SOM at the end of the irrigation periods was marginally greater in pots irrigated with freshwater than in those irrigated with effluent, reflecting the high assimilation potential of the soil for the organic matter contained in the effluent. This result is in agreement with those reported in a previous study [28] regarding the SOM in the subsoil (0.2-2.0 m), which was attributed to a priming effect induced by available substrate in effluent treated pots. Effluent supplies the soil with organic C, water, and nutrients which stimulate the growth and activities of microorganisms, which in turn induce decomposition of old SOM in the soil [29]. This was also indicated by the higher dehydrogenase activity in the soil of pots irrigated with effluent. In this study, soil depth had no effect on SOM due to the priming effect, which probably masked potential accumulation of SOM, particularly in the upper soil layer, in effluent irrigated pots. Plant species significantly affected SOM, with pots planted with sunflower showing higher values than those planted with castor bean from the beginning of the second irrigation period to the end of the experiment. This effect was observed in both effluent and freshwater irrigated pots and could be indirect evidence of differences between plant species in root exudates and in the population and activity of heterotrophs mediating SOM mineralization, since there were similarities between plant species in the quality and loading of the applied effluent or freshwater (see Section 2). Previous work provided evidence that plant species, through root exudates, are able to adjust organic C in the rhizosphere and shape the microbial community, which is sensitive to changes in soil C substrate [14]. A variation of 20% in the C mineralization rate has been reported, attributed to differences in the affinity between decomposers and available substrate [30]. In another study it was found that changes in plant diversity resulted in differences in the abundance, composition, and functions of microbial heterotrophic communities, leading to variations in respiration rates [31]. Moreover, plant species, through root exudates, can have a more direct effect on the concentration of SOM. For example, it has been reported that some species exude root enzymes, such as nitroreductase, dehalogenases, and laccases, which may induce the degradation of more refractory organic compounds [32]. However, such an effect was not investigated in the present study. Soil TKN did not follow the same pattern as SOM, also showing a slight decrease throughout the experimental period. In a previous study [16] with cyclic application of pre-treated effluent to different plant species, a similar pattern for SOM and TKN was observed, suggesting that C and N mineralization are tightly linked. However, in this study, the differences between the TKN pattern and that of SOM suggest that the mineralization of organic N in the soil may be explained by some mechanism different from that of SOM. This suggestion is strengthened by the concentration of ammonium N in the soil, which was expected to be relatively low due to the nitrification induced by the favorable environmental conditions [16]. Rapid N nitrification rates have been observed in sites under effluent application, which was attributed to the increased C and N availability and the application of wet-dry cycles [33,34].
Soil P increased in the upper soil layer in pots irrigated with effluent, which is attributed to the high adsorption capacity of the soil. Fine textured soils carry more sites capable of reacting with P and increase the interaction time between soil particles and water [27]. In freshwater irrigated pots, soil P dropped significantly during the first irrigation period, showing lower values than effluent irrigated pots until the end of the experiment. This effect is not consistent with the amount of P applied via effluent irrigation or fertilization; however, in soils with relatively high P content, release to the soil solution is likely to occur when the soil is exposed to water with low P concentration [27,35]. No differences in soil P were detected between plant species, reflecting the similar P loading between plant species, since only a small fraction of the applied P is recovered in plant biomass [13,15]. Previous work has shown that effluent application increases enzyme activity in the soil, reflecting the larger microbial communities present due to increased C and N availability [20,36]. In this study, differences in dehydrogenase activity were observed between effluent and freshwater irrigated pots, which is in agreement with the SOM and TKN results, suggesting differences in microbial biomass in the soil between effluent and freshwater irrigated pots. It has been found that in a dairy shed effluent irrigated soil the microbial biomass C and the dehydrogenase activity were elevated until day 30 of the experiment and then declined due to a decrease in soluble organic C [37]. Similar microbial biomass and dehydrogenase activity between secondary effluent irrigated soil and soil irrigated with freshwater have been reported, suggesting changes only in a minor fraction of the bacterial community [36]. With regard to soil depth and its effect on dehydrogenase activity, the results showed that it was similar in the upper soil layers but declined slightly at a depth of 30 cm. The lower microbial biomass and activities expected in the deeper soil layer may account for this result [36,37]. In this study plant species had no effect on dehydrogenase activity, suggesting that the potential variations between plant species in microbial biomass and communities were not sufficient to differentiate dehydrogenase activity.
Pots irrigated with freshwater showed higher seed yield for castor bean and sunflower compared to those irrigated with effluent. The higher level of soil salinity in effluent irrigated pots may account for this effect. In addition, the potential presence of toxic materials, such as heavy metals and aromatic hydrocarbons, in the effluent may have also adversely affected seed yield in effluent irrigated pots [38]. Plant species significantly affected seed yield in this study, with castor bean showing greater yield compared to sunflower. Previous work provides evidence that annual seed yield varies greatly with plant species and genotypes, environmental conditions, and agronomic practices. Typical seed yield reported for castor bean ranges from 900 to 1,200 kg/ha under irrigation, with 40-60% oil content [39]. However, one study reported an annual maximum of 2,000 kg/ha with 48% oil for castor bean [40]; this yield is lower than that obtained in this study, probably due to the lower irrigation rates (320 mm/yr) applied in that study. In Greece, seed yields up to 5,000 kg/ha with about 50% oil content for castor bean have been observed, depending on the plant genotype [41]. These seed yields are in agreement with those found in our study, but they were achieved with lower water irrigation rates and N and P additions ranging from 80 to 100 kg/ha and 18 to 48 kg/ha, respectively. However, there seems to be a ceiling in the seed yield of castor bean at fertilization rates above 50 kg/ha for N and 30 kg/ha for P [42]. With regard to sunflower, its average seed yield ranges from 900 to 1,600 kg/ha of seed with oil content ranging from 18 to 40% [43,44]. However, a higher seed yield of 4,056 kg/ha and oil yield of 1,841 kg/ha, obtained from a treatment with no water stress, has been reported [45]. These values are similar to those obtained in this study, reflecting the favorable environmental conditions in which the crops were grown during the experimental period. With regard to biodiesel production, in this study it followed the pattern of the seed yield, since it was directly proportional to it. Furthermore, no significant differences in the percentage of oil and biodiesel extraction were observed between plant species or irrigation regimes.
Conclusions The results obtained in this study showed that irrigation with effluent did not result in significant changes in soil pH, SOM, TKN, and dehydrogenase activity, whereas soil P was found to increase in the upper soil layer. The soil EC varied slightly throughout the experiment in effluent irrigated pots, but no change was detected at the end of the experiment compared to the initial value, suggesting sufficient salt leaching. Only small differences between effluent and freshwater irrigated pots were observed with regard to the soil parameters examined in this study. Thus, pots irrigated with effluent were found to have higher soil EC, P, and dehydrogenase activity compared to those irrigated with freshwater. In contrast, pots irrigated with freshwater had slightly higher SOM and TKN content. This result suggests that the effluent, through C and N additions, probably stimulated the growth of soil microorganisms which in turn induced mineralization of organic matter in the soil, as was also indicated by the higher dehydrogenase activity in effluent irrigated pots. With regard to plant species, only minor effects on soil properties were observed, with sunflower showing greater values of SOM and TKN than castor bean. This effect provides evidence of differences between plant species with regard to microbial biomass, communities, and activities related to C and N mineralization in the soil. With regard to seed yield, it was higher in freshwater irrigated pots compared to that in pots irrigated with effluent, probably due to the lower level of soil salinity. Plant species significantly affected the seed yield in this study, with castor bean showing greater yield compared to sunflower. Biodiesel production followed the pattern of seed yield. In conclusion, the findings of this study suggest that wastewater effluent can constitute an important source of irrigation water and nutrients for bioenergy crop cultivations with minor adverse impacts on soil properties and seed yield. Plant species play an important role with regard to the changes in soil properties and to the related factors of seed and biodiesel yields. Further research is needed in order to elucidate the effect of plant species on the microbial biomass and communities which mediate C and N turnover in soil and nitrogen assimilation in plant biomass, affecting N cycling and losses (as nitrates or NOx gases) to the environment. Knowledge provided would help in the successful selection of bioenergy crops and management practices ensuring sustainable yield with minor adverse environmental impacts arising from the use of the effluents. Figure 1. Temperature and rainfall during the experimental period. Figure 2. Average values of (a) soil organic matter (SOM); and (b) total Kjeldahl nitrogen (TKN) as a function of water quality and plant species, during the three irrigation periods (2007-2009). Figure 3. Average values of soil available P as a function of (a) water quality and plant species; and (b) water quality and soil depth, during the three irrigation periods (2007-2009). Figure 4. Dehydrogenase activity as a function of (a) water quality; and (b) soil depth, at the beginning and end of the experimental period (November 2009). Figure 5. Plant height of different plant species at the end of the three irrigation periods. Table 1. Average values of chemical parameters in freshwater and effluent samples. Table 2. Hydraulic, N, P, and COD loads applied to freshwater and effluent irrigated pots.
Table 3. Average values of seed, oil and biodiesel yields for different crops and irrigation regimes.
Novel genomic targets of valosin-containing protein in protecting pathological cardiac hypertrophy Pressure overload-induced cardiac hypertrophy, such as that caused by hypertension, is a key risk factor for heart failure. However, the underlying molecular mechanisms remain largely unknown. We previously reported that the valosin-containing protein (VCP), an ATPase-associated protein newly identified in the heart, acts as a significant mediator of cardiac protection against pressure overload-induced pathological cardiac hypertrophy. Still, the underlying molecular basis for the protection is unclear. This study used a cardiac-specific VCP transgenic mouse model to understand the transcriptomic alterations induced by VCP under the cardiac stress caused by pressure overload. Using RNA sequencing and comprehensive bioinformatic analysis, we found that overexpression of VCP in the heart was able to normalize the pressure overload-stimulated hypertrophic signals by activating G protein-coupled receptors, particularly the olfactory receptor family, and by inhibiting transcription factors controlling cell proliferation and differentiation. Moreover, VCP overexpression restored pro-survival signaling by regulating alternative splicing alterations of mitochondrial genes. Together, our study revealed a novel molecular regulation mediated by VCP under pressure overload that may bring new insight into the mechanisms involved in protecting against hypertensive heart failure. Figure 1. VCP-mediated transcriptomic alterations are different between the sham and 2 weeks (2W) TAC conditions. (a) The volcano plot of DEGs between VCP TG and WT mice at the sham and 2W TAC conditions. Red and green dots represent up- and down-regulated genes in the VCP TG group, respectively. The red dash line represents the threshold of FDR. The dots above the line are genes with FDR < 0.05. n = 3-4/group. (b), (c) GO functional analysis of DEGs induced by VCP between the sham and TAC conditions. GO functions of DEGs based on |log2FC| > 1 and p < 0.05 by comparing VCP TG with WT mice at the condition of sham (b) and 2W TAC (c), in terms of the cellular components (CC, green), molecular function (MF, red) and biological process (BP, blue). The GO analysis based on the cellular components (CC) was similar between the sham and TAC conditions, as both were predominately involved in membrane proteins. However, the GO analysis based on the MF and BP showed a remarkable difference between the sham and TAC conditions. The VCP-induced DEGs at the sham condition were related to ion binding, transfer, and activity as well as protein binding (Fig. 1b), while the DEGs induced by VCP under the TAC condition involved the G protein-coupled receptors (GPCRs) and their signal transduction (Fig. 1c). These data indicate that the gene regulation by VCP differs between the sham and the stress conditions. We further compared the top DEGs between the VCP TG and WT mice detected at both the sham and TAC conditions. As shown in Table S1, based on FC, the top DEGs were remarkably different between the two states, and the DEGs with the highest FCs were further validated by qRT-PCR (Fig. S2). We also compared the alterations of the most significant DEGs detected in VCP TG mice at the sham condition with their corresponding alterations at the 2W TAC condition.
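As background for the GO functional analysis above, a common way to score term enrichment is a hypergeometric over-representation test; the sketch below is a minimal illustration with hypothetical counts, not the actual tool or parameters used in the study.

```python
from scipy.stats import hypergeom

N = 20000   # background genes in the genome annotation
K = 300     # background genes annotated with the GO term (e.g., GPCR signaling)
n = 250     # significant DEGs in the comparison
k = 15      # DEGs annotated with the term

# P(X >= k): probability of drawing at least k annotated genes by chance
# when sampling n genes from a population of N containing K annotated ones.
p_enrich = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_enrich:.3e}")
```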
As shown in Table S2, based on FDR < 0.05, among the 45 most significant DEGs induced by VCP at the sham condition, only 7 were also detected in the TAC groups (highlighted in bold). These included three genes related to DNA, metal and protein binding (Hfm1, Cntnap2, Zfp616), one gene involved in transmembrane protein interaction and receptor transport (Mamdc4), and three uncharacterized genes (Fam11b, BC049715, Fsip2). These data together demonstrate that VCP elicits a distinct transcriptomic alteration upon pressure overload, indicating a stress-specific regulatory role of VCP in the heart. We also examined the corresponding alteration of these seven overlapping DEGs in WT mice under the TAC stress. As shown in Table S2, the modifications of these seven DEGs in WT between 2W TAC and sham control were opposite to those detected in VCP TG sham mice vs WT sham. Additionally, to determine the potential association between the VCP-mediated gene regulation and the inhibition of cardiac hypertrophy, we compared the DEGs detected during the development of cardiac hypertrophy secondary to 2W TAC with the DEGs induced by VCP overexpression at the sham condition. Based on FDR < 0.05 and FC > 2, 22 overlapping significant DEGs were found between the comparison of WT 2W TAC vs. WT sham and the comparison of VCP TG sham vs WT sham (Table 1). Interestingly, the alterations of these overlapping DEGs in VCP TG mouse hearts were opposite to those in 2W TAC WT mice when both were compared to WT sham. As shown in Table 1, there were ten DEGs upregulated by 2W TAC in WT mice but downregulated in VCP TG vs WT sham mouse hearts. These DEGs included genes involved in transmembrane protein interaction and receptor transport (such as Adam8, Sh3tc1, Mamdc4, Lrrc47, Spns2, Plscr2), and genes related to the regulation of fetal development and cell cycle and fate (such as Sox17, Rgcc, Tacc2, and Fas). In contrast, 12 DEGs were downregulated in WT after 2W TAC but upregulated in VCP TG mouse hearts vs sham. These DEGs included genes related to DNA, metal, and protein binding (such as Erbb4, Hfm1, Dcdc5, Sytl2, Cntnap2, Zfp616), oxygen transport (Hbb-bt), and other genes related to cell development and motility and the organization of microtubules (such as Upk1b, Fsip2, Ccdc40). VCP induces specific DEGs involving GPCRs in response to the pressure overload in the mouse hearts. Considering that the regulation by VCP is stress-associated, we further explored the specific gene regulation in response to pressure overload by examining the DEGs between the 2W TAC mice and their corresponding sham controls in either VCP TG mice or WT mice. The comparison was performed to explore the different responses to TAC between the two groups. As shown in Fig. 2a, b, the majority of GO functions of the DEGs were similar between the VCP TG and WT mice within the comparison of 2W TAC with sham. However, there was a distinct GO term, sensory perception of smell, in the VCP TG mice in response to 2W TAC (Fig. 2b). In addition, as shown in Table 2, based on FDR < 0.05, the top 20 DEGs were upregulated genes in 2W TAC VCP TG mice compared with their sham controls, while most of the top DEGs were downregulated genes in 2W TAC vs sham in the WT.
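The DEG filtering and overlap logic described above (FDR < 0.05, FC > 2, opposite-direction overlap between comparisons) can be sketched as follows; the tables and gene entries are hypothetical placeholders, not the study's data.

```python
import pandas as pd

# Hypothetical DEG tables; real ones would come from DESeq2/edgeR output.
cols = ["gene", "log2FC", "FDR"]
tac_vs_sham_wt = pd.DataFrame(
    [["Adam8", 1.8, 0.01], ["Erbb4", -2.1, 0.02], ["Sox17", 1.4, 0.03]],
    columns=cols)
vcp_vs_wt_sham = pd.DataFrame(
    [["Adam8", -1.5, 0.04], ["Erbb4", 2.3, 0.01], ["Hfm1", 1.9, 0.02]],
    columns=cols)

def significant(df, fdr=0.05, min_log2fc=1.0):
    """FDR < 0.05 and |log2FC| > 1 (i.e., fold change > 2)."""
    return df[(df.FDR < fdr) & (df.log2FC.abs() > min_log2fc)]

a = significant(tac_vs_sham_wt).set_index("gene")
b = significant(vcp_vs_wt_sham).set_index("gene")

overlap = a.join(b, lsuffix="_tac", rsuffix="_vcp", how="inner")
# Genes regulated in opposite directions in the two comparisons:
opposite = overlap[overlap.log2FC_tac * overlap.log2FC_vcp < 0]
print(opposite)
```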
Consistent with the results of the GO analysis, among these top 20 ranked DEGs in VCP TG mice, more than half (13 out of 20) of the genes belonged to the olfactory receptor family (Olfr787, Olfr193, Olfr1311, Olfr1299, Olfr1303, Olfr998, Olfr1231, Olfr1448, Olfr73, Olfr507, Olfr782, Olfr498, Olfr1023), plus two vomeronasal type-1 receptors (V1rd19 and Vmn1r183) (Table 2). Interestingly, we found that a few top Olfr DEGs presented in Table 2 overlapped between VCP TG and WT mice but showed a different or opposite change, using an FC greater than 2 and a p-value less than 0.05 as the threshold. As shown in Fig. 3a, when comparing 2W TAC with their sham controls, Olfr1097 and Olfr181 were found to be the top significant Olfr DEGs that were downregulated in WT mice but upregulated in VCP TG mice (log2FC −5.4 in WT vs log2FC 3.8 in VCP TG for Olfr1097; and log2FC −4.9 in WT vs log2FC 2.5 in VCP TG for Olfr181, respectively). In addition, Olfr1373 was upregulated in WT but was not detected in VCP TG mice (Fig. 3a). In contrast, several top Olfr DEGs showed significant upregulation in VCP TG mice but were downregulated in the WT mice, including Olfr193 (log2FC 6.0 in VCP TG vs log2FC −3.97 in WT), Olfr1311 (log2FC 5.86 in VCP TG vs log2FC −2.66 in WT) and Olfr1299 (log2FC 5.83 in VCP TG vs log2FC −3.53 in WT). The different or opposite regulation of these top Olfr DEGs between VCP TG and WT mice identified by RNA-seq was further validated by qRT-PCR (Fig. 3b, c). In addition, we also noticed a few Olfr DEGs that were regulated only in WT mice. IPA predicts VCP to inhibit TAC-induced hypertrophic upstream transcription factor. To further identify the upstream regulators of the DEGs in VCP TG mice, we conducted an Ingenuity pathway analysis (IPA) to determine the top transcription factors associated with most of the significant DEGs detected in both the VCP TG and WT groups, using a p-value less than 0.05 and an FC greater than 2 as the threshold. Our IPA analysis identified cAMP-responsive element-binding protein 1 (CREB1), a phosphorylation-dependent transcription factor, as one of the top transcription factors associated with the significant DEGs in both the VCP TG and WT groups when the TAC mice were compared to their corresponding sham mice. Notably, this transcription factor appeared to exert an opposite regulation on its downstream genes between VCP TG and WT mice. As shown in Fig. 4a, b, compared to the sham controls, CREB1 was predicted to be activated in the WT mice under the treatment of 2W TAC (Fig. 4a), but to be inhibited in 2W TAC VCP TG mice (Fig. 4b). VCP is predicted to modulate the alternative splicing of mitochondrial proteins under the TAC stress. To determine whether VCP regulated differential transcript splicing in response to the cardiac stress, we conducted an alternative splicing analysis based on the differentially expressed transcripts (DETXs) between VCP TG and WT mice at both the sham and 2W TAC conditions by using a recently developed count-based statistical model, LeafCutter 11 . The alternatively excised intron clusters were identified by the LeafCutter model, and intron usage as counts or proportions was summarized. With this analysis, 18 DETXs were predicted in the sham groups, and 39 DETXs were predicted in the TAC groups between VCP TG and WT mice (Table S3).
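To illustrate the quantity LeafCutter works with, the sketch below computes percent-spliced-in (PSI) values for the introns of a single cluster and their between-group difference (dPSI). The counts are hypothetical, and this is only a descriptive summary, not LeafCutter's actual statistical model, which additionally fits a Dirichlet-multinomial test.

```python
import numpy as np

# Hypothetical read counts for two alternative introns of one cluster
# (rows: introns; columns: replicates), one array per group.
wt_counts  = np.array([[120.,  40.,  90.],    # intron A
                       [ 30., 110.,  45.]])   # intron B
vcp_counts = np.array([[ 25.,  35.,  20.],
                       [115.,  95., 130.]])

def psi(counts):
    """Percent spliced in: each intron's share of its cluster's reads."""
    return counts / counts.sum(axis=0)

psi_wt = psi(wt_counts).mean(axis=1)     # mean PSI across replicates
psi_vcp = psi(vcp_counts).mean(axis=1)
dpsi = psi_vcp - psi_wt
for name, d in zip(["intron A", "intron B"], dpsi):
    print(f"{name}: dPSI = {d:+.2f}")
```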
Among these DETXs, two, i.e., Sorbs1 (Sorbin and SH3 domain-containing protein 1), a gene regulating cell adhesion and cytoskeletal formation, and Ttn, a cellular structure gene, were detected in VCP TG vs WT at both the sham and 2W TAC conditions (Table S3). These two DETXs were also found in the WT 2W TAC mice when they were compared to WT sham controls (Table S3). In addition, among the DETXs between VCP TG and WT under 2W TAC conditions, two DETXs belonging to the subunits of NADH:ubiquinone oxidoreductase (complex I), i.e., NADH dehydrogenase ubiquinone flavoprotein 3 (Ndufv3) and NADH dehydrogenase iron-sulfur protein 6 (Ndufs6), were detected (Table S3). We further used LeafViz to visualize the significant splicing events for these two DETXs in each group 11 . The splicing events of the DETXs detected in WT and VCP TG at 2W TAC were identified based on differential usage of a mutually exclusive exon. Differential splicing was measured by a change in the percentage spliced in (dPSI), using FDR < 0.05. The splicing events of Ndufv3 and Ndufs6 displayed different profiles of alternative intron excision in the two groups, as indicated by different dPSI values (Fig. 5a, b). qRT-PCR was also used to validate the alterations of alternative splicing for each gene by measuring the relative expression levels of the corresponding splicing variants using two primer sets 12 . The qPCR results (Fig. 5c, d) showed different expression ratios of the splicing variants between WT and VCP TG mice at 2W TAC, which supported the predictions of the LeafCutter analyses (Fig. 5a, b). Discussion Our previous results indicated that VCP was a promising therapeutic candidate. It has been demonstrated in the TG mice that an increase of VCP prevents the stress-induced pathological cardiac deterioration with fewer side effects on normal unstressed hearts 10 . The current study further investigated the molecular mechanisms by which VCP protects the heart against pressure overload-induced cardiac hypertrophy. Our results from the GO analysis revealed that the VCP-induced DEGs at the sham condition were related to DNA, ion and protein binding, and protein transfer and activity. We found that a few DEGs related to DNA, metal, and protein binding were upregulated in VCP TG vs WT at both the sham and 2W TAC conditions, including Hfm1, Cntnap2 and Zfp616. Simultaneously, Mamdc4, a gene involved in receptor transport, was downregulated in VCP TG at the sham condition but increased under the TAC condition. It was notable that these DEGs showed an opposite alteration between the 2W TAC WT and VCP TG sham mice when both were compared to the WT sham mice. These results shed some light on the protective effect of VCP against pressure overload-induced hypertrophy; however, the role of these genes remains largely unknown. Our results also showed that overexpression of VCP could induce a gene regulation that would resist the molecular alterations induced by 2W TAC in WT. For example, while 2W TAC in WT mice induced an upregulation of the genes involved in transmembrane protein interaction and receptor transport as well as the regulation of fetal development and cell cycle/fate (Adam8, Sh3tc1, Mamdc4, Lrrc47, Spns2, Plscr2, Sox17, Rgcc, Tacc2, and Fas), these genes were downregulated by the overexpression of VCP.
In contrast, overexpression of VCP could upregulate the genes related to DNA and protein binding and cell motility (Erbb4, Hfm1, Dcdc5, Sytl2, Cntnap2, Zfp616), while 2W TAC downregulated these genes in WT mice. These data indicate that the overexpression of VCP may play a protective role by preemptively regulating the gene expression changes induced by TAC, thus preventing the TAC-induced pathological signaling. Although these genes' exact roles and regulatory mechanisms remain largely unknown, evidence has indicated some potential effects of these genes on the heart. For example, studies have shown that the downregulation of Erbb4 was associated with chronic cardiac hypertrophy secondary to aortic stenosis, which played a role in the transition from compensatory hypertrophy to failure 13 . It was also found that an elevated expression of ADAM8 was associated with vascular diseases in mice and humans 14 , and it acts as a potential surrogate of inflammation, which has been associated with myocardial infarction 15 . These studies support our findings and stimulate future investigation to explore the molecular mechanism underlying the cardiac protection conferred by VCP. Notably, the analysis of the transcriptomic profiles showed that VCP-induced gene regulation under stress in 2W TAC mice was dramatically different from that observed in the sham groups when VCP TG mice were compared to WT mice. These results indicate that VCP acts as a stress-associated protein and plays a protective role specifically in stressed hearts, which are distinct from normal unstressed hearts. These data support our previous phenotypic finding that overexpression of VCP protected the heart against pressure overload-induced pathological hypertrophy, but did not affect cardiac growth in normal hearts 10 . Figure 4. Transcription factor CREB1 exhibits an opposite regulation of downstream genes between WT and VCP TG mice in response to 2W TAC. Based on the comparison between 2W TAC and sham controls in WT and VCP TG mice, respectively, an Ingenuity Pathway Analysis (IPA) of the RNA-seq data identified CREB1 as a top transcription factor in both WT and VCP TG mice, but with an opposite regulation between the two groups (data were analyzed through the use of IPA, QIAGEN Inc., https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis). CREB1-mediated signaling was predicted to be activated in WT (a) but inhibited in VCP TG mice (b). Given the strong link between pressure overload-induced cardiac hypertrophy and heart failure, we focused on the cardiac transcriptomic characterization of VCP in response to 2W TAC. One of our novel findings was that, among the top regulated DEGs, a group of olfactory receptors (ORs) and vomeronasal receptors (VRs) were found to be significantly upregulated in the VCP TG mice at 2W TAC when compared to their shams. Interestingly, some of these ORs were downregulated in WT in response to the 2W TAC. The opposite regulation of these genes between the VCP TG and WT mice indicates that these receptors may play an essential role in the VCP-mediated cardiac protection against the pressure-overload stress. We also noticed that some DEGs were only specifically regulated in WT mice, indicating that multiple signaling pathways may be involved in the pressure overload-induced alterations of the transcripts in WT.
Although ORs and VRs have been reported to be located in heart muscle cells 16,17 , they are best known as odor receptors in neurons, and their roles in the heart have not been recognized. It has been shown in other tissues that ORs were involved in the activation of the olfactory-type G protein, which in turn activated adenylate cyclase, a lyase that converts ATP into cyclic AMP (cAMP) 18 , participating in the transfer of calcium and sodium ions into the cell 19 . Other ORs located in the immune system have been linked to the death of some types of leukemia cells 20 . Although both ORs and VRs are GPCRs, these receptors are distantly related to the primary olfactory system's receptors, highlighting their different roles 18,21 . Our results showed for the first time a link between VCP and these ORs and VRs, which suggests a potential protective role of these genes in preventing pressure overload-induced cardiac hypertrophy. These data open a new research direction to explore the role of these receptors in cardiac protection. Although our previous study has shown a significant increase of Nppa and Nppb in the WT TAC vs sham mice by qPCR, accompanied by multiple corresponding physiological and histological alterations that validated the TAC model 10 , this significant increase was not detected by RNA-seq. Among the multiple potential reasons, one may be the very low expression level of these fetal genes in normal adult ventricles, which may not be high enough to be detected reliably by RNA-seq at our sequencing depth. We would also like to note that, for some scarcely expressed genes, inconsistencies between RNA-seq and qPCR results are known to occur, since qPCR is more sensitive. In addition, our qPCR results showed that Nppa was relatively increased in VCP TG vs WT sham mice. Although Nppa was detected to be dramatically increased in pathological cardiac hypertrophy, as reported in our and others' previous studies, an increased ventricular expression of Nppa is not necessarily correlated with cardiac hypertrophy, particularly with a small increase. It has been shown that hypertrophy can occur in the absence of increased ventricular Nppa, and increased levels of Nppa can also occur in the absence of detectable cardiac hypertrophy 22 , indicating that cardiac hypertrophy is the result of a multifactorial process. On the other hand, transgenic overexpression of Nppa tended to protect against hypertrophic stimuli 23 . Since the regulatory mechanisms underlying Nppa expression in the adult ventricles and its subsequent reactivation in the diseased heart in vivo have not been resolved satisfactorily, the role of VCP in these processes needs further investigation. Another novel finding from this study was the identification of the upstream transcription factors associated with VCP-mediated protection. The IPA identified CREB1 as one of the top transcription factors regulated in both the VCP TG mice and WT mice in response to the 2W TAC, but oppositely: while CREB1 was predicted to be activated in the WT mice, it was predicted to be inhibited in the VCP TG mice. It is known that CREB1 is a member of the leucine zipper family of DNA binding proteins and it can be phosphorylated by several protein kinases to induce the transcription of genes in response to hormonal stimulation of the cAMP pathway 24 .
Considerable evidence indicates that CREB1 is involved in cardiac hypertrophy upon stimulation 25,26 . Our data supported a strong association between the activation of CREB1 signaling and pressure overload-induced cardiac hypertrophy in the WT mice. Several studies indicated a direct link between the activation of CREB and GPCRs, which involves a highly conserved cAMP/PKA/CREB pathway. It has been shown that GPCRs binding their ligands leads to the dissociation of the heterotrimeric G protein complex, which subsequently activates or inhibits the transmembrane adenylyl cyclase molecules. The activated cyclase increases cAMP synthesis; cAMP binds to the regulatory subunit of protein kinase A (PKA-R), leading to a dissociation of the catalytic subunits (PKA-C) from the tetramers consisting of regulatory and catalytic subunits. Free PKA-C then phosphorylates substrate proteins, including the transcription factor CREB. Phosphorylation of CREB is required for interaction with the CREB-binding protein (CBP) co-activator, which activates the transcription of genes that contain cAMP response element (CRE) sites in their promoters 27,28,29 . Olfactory receptors (ORs) belong to the GPCR family and have been shown to be involved in the regulation of CREB-mediated gene expression 30 . Our data showed that VCP-mediated regulation is highly associated with both the OR genes and CREB-mediated signaling, implying a potential link between ORs and CREB activity; however, the precise underlying mechanisms will need further study. Our previous study has shown that VCP attenuates pathological cardiac hypertrophy by selectively inhibiting pressure overload-induced mTORC1/AKT/pS6 signaling 10 . However, the exact molecular mechanisms underlying this regulation remained mostly unknown. The results from this study bring some new insights into the understanding of VCP's regulation of this signaling. First, recent studies showed that GPCR signaling inhibited mTORC1 31 . Our RNA-seq data indicated that VCP upregulated a group of GPCRs, particularly ORs, which suggests that the inhibitory effect of VCP on mTORC1 under stress could be mediated by the activation of these ORs, thus providing a potential explanation for our previous findings. Second, several studies have indicated a connection between Akt phosphorylation at residue 308 and the activation of CREB in other tissues, such as neurons and 293T cells 32,33 . These studies showed that Akt phosphorylated at 308 (pAkt-308) interacted and co-localized with CREB, which is required for CREB phosphorylation. These results imply an association between the VCP-mediated downregulation of pAkt-308 detected in our previous study and the inhibitory effect on CREB-mediated genes identified in the current study. This further indicates that VCP acts as a new inhibitor of CREB1 signaling, by which VCP prevents the pressure overload-induced activation of this signaling. Finally, our results also revealed a novel role of VCP in regulating RNA splicing alterations under the stress of pressure overload. The excision of introns from pre-mRNA is an essential step in mRNA processing. We used a newly developed statistical model, LeafCutter 11 , to detect the potential alternative splicing that may be linked to VCP. This model has been shown to accurately identify robust variation in intron excision across conditions with count-based statistical modeling 11 .
Our results showed that two important DETXs associated with the first enzyme complex (complex I) in the electron transport chain of mitochondria, Ndufs6 and Ndufv3, were altered in the VCP TG vs. WT mice at 2W TAC. These data strongly supported our previous finding that VCP increased complex I-dependent mitochondrial respiration in the heart 9 . In particular, DETXs of Ndufv3 and Ndufs6 were only detected in the VCP TAC mouse hearts, but not in WT TAC hearts, indicating a potential specific effect of VCP in response to the 2W TAC. We are particularly interested in Ndufv3 and Ndufs6 for the following reasons. First, our previous studies showed that VCP acted as a stress-associated protein that played a protective role in stressed hearts, which are distinct from normal unstressed hearts. Our current analysis of transcriptomic profiles also showed that VCP-induced gene regulation under stress in the 2W TAC mice was dramatically different from that observed in the sham control mice when VCP TG mice were compared to WT mice. These two DETXs were detected at 2W TAC in VCP TG, but not in the sham groups, when compared to WT. Second, these two DETXs are associated with complex I of the mitochondria, which strongly supports our previous finding that VCP increased complex I-dependent mitochondrial respiration in the heart. Third, far fewer alternative splicing sites were detected in these two genes, which will make it relatively easier in the future, with further functional validation, to identify the alternatively spliced isoforms that are specifically responsive to VCP. In addition, we found that DETXs involved in cell metabolism and energy generation were regulated by VCP, such as Sorbs1, a significant regulator of insulin-stimulated signaling and of glucose uptake 34 . As it has been reported that Sorbs1 is involved in a second signaling pathway required for insulin-stimulated glucose transport 34 , our results may imply a potential link between VCP and cardiac energy metabolism. However, considering that Sorbs1 has multiple start sites and alternatively spliced isoforms, which dramatically increase the complexity of regulation and the difficulty in identifying the role of each specific isoform, further investigations are needed to determine the particular regulating effects conferred by VCP. Furthermore, we found that VCP was also involved in regulating the splicing alteration of genes participating in the structure and contractility of the heart muscle, such as Ttn. Interestingly, mutations in this gene are associated with familial hypertrophic cardiomyopathy 35 . As a powerful approach for studying variation in alternative splicing, this analysis allows the identification and quantification of both known and novel alternative splicing events, which may bring new insights into the regulation by VCP. However, it should be noted that the results from such tests are only predictive, and any identified splicing variants should be subject to further validation by functional studies. This is particularly applicable to genes with multiple start sites and many alternatively spliced isoforms, such as Sorbs1 and Ttn, for which it may be difficult to identify the alteration and the role of each specific isoform. In summary, as shown in Fig.
6, our data revealed potential new transcriptomic networks underlying the cardiac protection conferred by VCP, involving regulation at multiple levels, including specific hypertrophic transcription factors and genes and the alternative splicing of mitochondrial genes, inhibiting the hypertrophy signaling and promoting mitochondrial function in the stressed hearts. Methods Study design. To explore the molecular basis of the cardiac protection conferred by VCP, a cardiac-specific VCP TG mouse model was generated and compared to its littermate WT mice. To mimic the pressure overload-induced cardiac stress, both WT and VCP TG mice were subjected to a TAC surgery for 2 weeks as described previously 10 , and a group of mice with the sham operation served as the control. Physiological and histological studies confirmed the success of the models. LV tissues were collected from these mice, and mRNAs were extracted and used for RNA sequencing. The quality assessment was performed based on external RNA spike-in controls. Comparisons were first performed between VCP TG and WT mice at two treatment conditions, sham and 2W post-TAC, to determine the different regulation of the cardiac transcriptome by VCP between the stressed and unstressed conditions, and then performed between sham and TAC in WT and VCP TG mice, respectively, to characterize the genomic regulations involved in the cardiac hypertrophy induced in WT mice and the protective regulations in VCP TG mice against the pressure overload-induced cardiac stress. The detailed methods are described as follows. Animals and heart tissue samples. A cardiac-specific VCP TG mouse model was generated as described previously 10 , in which the cardiomyocytes showed a 3.5-fold increase of VCP compared with WT mice 10 . There are no significant differences between the VCP TG and WT mice at the baseline condition at three to six months old. No difference was found between male and female mice in either the WT or the VCP TG group at this age. VCP TG mice and their litter-matched FVB WT male and female mice at three to five months old were randomly assigned into two experimental groups, sham or TAC, for 2 weeks. TAC was performed as previously described 10,36 . The sham-operated mice underwent the same procedure except for constriction of the aorta. Cardiac hypertrophy and function were confirmed by echography and histology. The heart tissues were collected after ex vivo measurements. All animal procedures were performed under the NIH guidance (Guide for the Care and Use of Laboratory Animals, revised 2011), and the Institutional Animal Care and Use Committee of Loma Linda University approved the protocols. RNA extraction and RNA-seq. Total RNA was extracted from left ventricular (LV) tissues using the Qiagen miRNeasy kit. The NuGen Ovation Mouse RNA-Seq kit was used to construct RNA-seq libraries with 1% of external RNA spike-in controls. Figure 6. Summary of the gene network regulated by VCP upon pressure overload. The regulatory mechanism underlying the cardiac protection of VCP involves potential effects at multiple levels, including the upstream transcription factors, gene expression and alternative splicing, which constitute an integrative gene network, inhibiting the hypertrophy signaling and promoting mitochondrial function in the stressed hearts. Bioinformatic analysis of RNA-seq data. • RNA sequencing and quality control (QC) Next-generation sequencing (NGS) was used to determine the transcriptomic gene expression alterations modulated by VCP using RNA-seq.
The raw fastq data were assessed by FastQC (v0.11.4) and the Bioconductor package ShortRead for quality control. The trimming process was performed by Trimmomatic v0.35 38 . In the gene-level analysis, the trimmed fastq data were aligned to the reference genome and quantified by Kallisto v0.43.1 with default parameters 39 . In Kallisto, isoform expression for each gene was summed to derive the counts, and transcripts per million (TPM) values were derived by the Bioconductor package tximport. The analysis of differentially expressed genes (DEGs) was performed with DESeq2 40 . Genes with fewer than 10 counts were discarded from the DEG analysis. The DEGs were defined by a false discovery rate (FDR) < 0.05, or FC > 2 with p < 0.05. In the transcript-level analysis, the trimmed fastq data were aligned by the 2-pass mode of STAR v2.5.4b 41 with default parameters. The analysis of DETXs was performed by LeafCutter 11 with a default parameter setting. The splicing variants were normalized by proportion for differential analysis. Briefly, LeafCutter was applied to the spliced reads to quantify differential intron usage across samples. In detail, it first extracted the junction reads from BAM files to identify the alternatively excised introns and summarized the intron usage as proportions across the samples of two groups. Finally, a differential analysis of intron usage proportions between the two groups' samples was performed using a Dirichlet-multinomial model. A splicing event was labeled significant if the FDR < 0.05, and LeafViz was used to visualize the significant splicing events. • Gene ontology (GO) analysis Briefly, a statistical test, usually the hypergeometric, χ², binomial, or Fisher's exact test, is used to compute p values, which are subsequently adjusted for multiple testing. The result of this analysis is a list of single biological annotations from a given ontology with their corresponding p values. Those terms with p values indicating statistical significance are representative of the analyzed list of genes and can provide information about the underlying biological processes. Graphs were generated by GraphPad Prism8 based on the GeneCodis data. • Pathway analysis The assessment of biological and interaction networks of candidate genes at 2W TAC within WT and VCP TG was generated through the use of IPA (QIAGEN Inc., https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis). The candidate genes were uploaded into IPA for the identification of their biological functions and the functional networks of the eligible molecules. Real-time quantitative RT-PCR (qPCR). qPCR was used to validate selected DEGs. cDNA was synthesized from the RNA of each sample using the Transcriptor First Strand cDNA Synthesis Kit (Roche). qRT-PCR was performed on a CFX96 Touch Real-Time PCR Detection System using iTaq Universal SYBR Green Supermix (Bio-Rad) according to the manufacturer's instructions 8,10,21 . Each sample was run in triplicate. Table 3 lists the sequences of the primers used for qPCR in this study.
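To make the DEG selection thresholds described above concrete, here is a minimal Python sketch, assuming a DESeq2 results table with its standard column names (log2FoldChange, pvalue, padj); the file name is hypothetical and not from the paper:

```python
import pandas as pd

# Hypothetical results file exported from DESeq2; the column names
# follow DESeq2's standard output format.
res = pd.read_csv("deseq2_results_vcptg_vs_wt_2wtac.csv", index_col=0)

# Genes with fewer than 10 counts are assumed to be filtered out upstream.
# DEG criteria from the Methods: FDR < 0.05, or FC > 2 with p < 0.05;
# |log2FC| > 1 is equivalent to a fold change greater than 2.
by_fdr = res["padj"] < 0.05
by_fc = (res["log2FoldChange"].abs() > 1) & (res["pvalue"] < 0.05)

degs = res[by_fdr | by_fc].sort_values("log2FoldChange", ascending=False)
print(f"{len(degs)} DEGs pass the thresholds")
```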
2020-10-24T05:06:15.912Z
2020-10-22T00:00:00.000
{ "year": 2020, "sha1": "325ab0fc020148ed77d4a3767a6d1245e6756a80", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-75128-z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "325ab0fc020148ed77d4a3767a6d1245e6756a80", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
236460319
pes2o/s2orc
v3-fos-license
Ultra-Reliable and Low-Latency Computing in the Edge with Kubernetes Novel applications will require extending traditional cloud computing infrastructure with compute resources deployed close to the end user. Edge and fog computing tightly integrated with carrier networks can fulfill this demand. The emphasis is on integration: the rigorous delay constraints, ensuring reliability on the distributed, remote compute nodes, and the sheer scale of the system altogether call for a powerful resource provisioning platform that offers the applications the best of the underlying infrastructure. We therefore propose Kubernetes-edge-scheduler, which provides high reliability for applications in the edge while provisioning less than 10% of resources for this purpose, and which, at the same time, guarantees compliance with the latency requirements that end users expect. We present a novel topology clustering method that considers application latency requirements and enables scheduling applications even on a worldwide scale of edge clusters. We demonstrate that in a potential use case, a distributed stream analytics application, our orchestration system can reduce the job completion time to 40% of the baseline provided by the default Kubernetes scheduler. Keywords: exciton transfer is not applicable here; keywords: edge computing, Kubernetes, scheduling, reliability, latency. Introduction Future applications, e.g., extended reality applications or 5G and beyond telco services, will require ultra-reliability and low-latency communication from the hosting compute and network infrastructure. Hence we are in the process of extending the traditional cloud with an emerging architecture. Edge computing, built on the fast access network of 5G, is capable of fulfilling such strict delay criteria. Remote edge nodes are prone to failures, and their downtime might be longer than that of a central infrastructure, i.e., data centers. Therefore, while the edge deployment of the compute elements of a service minimizes service delay, ensuring the high reliability of services is a challenge. The evolution of and the growing interest in virtualization technologies led to the appearance of centralized data centers that host cloud-native applications and have advanced infrastructure managers to ensure the seamless operation of those applications. Kubernetes has become the most popular cluster manager during the past 5 years: it is used primarily for orchestrating data center deployments running web applications. Its powerful features, e.g., self-healing and scaling, have attracted a huge community, which, in turn, is inducing a meteoric rise of this open source project. We venture to reshape Kubernetes' "heart and soul", the scheduler, to be suited for an edge infrastructure and for delay-sensitive applications to be deployed on the edge of a global network. As the edge infrastructure is highly prone to failures, and is considered to be expensive to build and maintain, self-healing features must receive more emphasis than in the baseline Kubernetes; therefore, a topology-aware system is needed that extends its widely used feature set with regard to network latency. Today's services strive for worldwide availability, and geographic reach might be even more crucial in the future. In order for a system to meet all the requirements, e.g., low latency and high availability practically everywhere, it should have tens of thousands of computing nodes, geographically distributed and fully connected, to serve all the clients. Generally, we can state that managing such a huge infrastructure is far from trivial, and it is exacerbated by the geographical spread.
Answering even simple questions, such as how to measure network characteristics effectively and how to react to topology changes, becomes more and more difficult. Besides availability, reaching high reliability is also challenging: service providers must ensure that they can respond to different failures, so that their users will not be affected by a service outage for a long time. In this article we give potential answers to the above challenges and provide a conceptual solution for the fulfillment of the strict time criteria of future applications. We propose our advanced edge-scheduler, which takes into account the underlying network latency and the applications' latency requirements in the scheduling decisions about the application components. Our backup resource multiplexing technique provides high reliability for the applications, with awareness of their latency requirements, by carefully provisioning resources for this aim. The system works at large scale, with a huge number of worker nodes and service requests, thanks to our dynamic clustering method, which can also organize a federated system dynamically. As a proof of concept, we implement the edge-scheduler in Kubernetes, and evaluate its performance in selected scenarios. The paper is organized as follows. In Section 2 we introduce our model for ensuring high reliability and ultra-low latency with economical edge resource provisioning. In Section 3 we show our proposed scheduler solution, which is based on an advanced heuristic scheduling algorithm that dynamically handles incoming events of a geographically widespread virtual infrastructure, supporting several latency-critical 5G applications. In Section 4 we present the operation of our re-scheduler, which periodically further decreases the provisioned resources in our system in an offline orchestration operation. We address scalability and present our edge node clustering solution based on network delay in Section 5. In Section 6 we describe the implementation choices we made, and we present our experiment results in Section 7. We discuss related prior art in Section 8, and we conclude the paper in Section 9. Scalable and Economical Edge Scheduling for Latency-Critical and Operation-Critical Applications Our proposed concept turns a virtual infrastructure scheduler into a manager of a geographically widespread infrastructure. It is built on two advanced scheduling algorithms that support latency-critical applications. In this section we introduce our proposition for reserving backup resources, and we build a model for describing the problem of minimizing the amount of those resources while conveniently providing reliability for delay-critical applications on top of them. In the architecture of the system, we refer to the physical entities with computational resources (processor, memory, network bandwidth) as nodes. In this sense, a node can represent a single server at the edge of the network, or an abstraction of an entire cloud data center. Since our system considers network latency in every aspect of application deployment, we use a delay matrix: the values in the delay matrix represent the smallest delay value between each node pair. The deployable units of application components are called Pods. Users can define a delay criterion for each latency-critical Pod, which gives the maximum network latency that is tolerated by the application from an arbitrary point defined by the application provider, which we call the origin. We call the set of nodes that satisfy this delay criterion around the origin the Pod's radius. The origin of a Pod can be defined either as one of the nodes (one that is close to the location of the users that the Pod will serve), or as another Pod deployed previously in the system (with which affinity is required for the Pod currently being deployed).
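As a minimal illustration of the delay matrix and the radius notion, consider the following Python sketch; the node names and delay values are purely illustrative assumptions, not measurements from the paper:

```python
# Symmetric delay matrix: DELAY[i][j] is the smallest measured network
# delay (in ms) between nodes i and j; names and values are illustrative.
DELAY = {
    "edge-1": {"edge-1": 0.0, "edge-2": 4.0, "cloud": 25.0},
    "edge-2": {"edge-1": 4.0, "edge-2": 0.0, "cloud": 22.0},
    "cloud":  {"edge-1": 25.0, "edge-2": 22.0, "cloud": 0.0},
}

def radius(origin: str, delay_criterion: float) -> set[str]:
    """All nodes whose delay from the Pod's origin stays within the
    Pod's delay criterion, i.e., the Pod's radius."""
    return {node for node, d in DELAY[origin].items() if d <= delay_criterion}

# A Pod with a 10 ms criterion and origin edge-1 may only be hosted on
# edge-1 or edge-2; the central cloud is too far away.
print(radius("edge-1", 10.0))  # {'edge-1', 'edge-2'}
```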
To provide high reliability for the applications, we provision backup compute resources on edge nodes, which we call placeholders. We prepare for only one node failure at any given moment, so we dimension the placeholders for the maximum set of Pods that can fail at once, i.e., the Pods of a single node. Therefore, each placeholder has a resource demand depending on two factors: i) how many Pods it supports simultaneously; ii) how the backed-up Pods are distributed among the nodes. A placeholder's size is not necessarily equal to the sum of the supported Pods' sizes: it can be less if the supported Pods are placed on different nodes. A Pod's placeholder must be assigned to a node that differs from the host of the Pod, and the placeholder must also fulfill the Pod's latency requirement. Since edge nodes are prone to failures and we strive to ensure high reliability for all the applications, we want to make sure that if a single node failure occurs, the placeholders in the system have enough reserved resources to restart all Pods of the failed node. We consider the resources on edge nodes expensive, since edge nodes have limited resource capacity compared to large data centers. The Pods have computational demands that our system needs to satisfy, i.e., processor, memory, and network bandwidth. In light of these two properties, one of our main goals is to minimize the resources reserved for placeholders in the system. An example view of our system architecture with two simple scheduling results is presented in Fig. 1, with a central cloud and several edge nodes. On the left side of Fig. 1, a non-optimal scheduling example is presented. In this case, the amount of backup resource (placeholder) reservation is greater than what the optimal solution would need. The result of how an advanced scheduler would deploy both the Pods and the placeholders can be seen on the right side. Since both of the Pods have common servers in their latency radius, their placeholders can be multiplexed in order to decrease the provisioned extra resources, while high reliability is still ensured for the Pods. Our edge-scheduler, an advanced scheduler extension, integrates and improves the rudimentary solutions described in [8,27]. The architecture of our edge-scheduler, with some mandatory operations of each component, is visualised in Fig. 2. The main components in our edge-scheduler are the Monitoring, Event handler, Clustering, Scheduler, and Re-scheduler. We introduce these components throughout the next sections.
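Before walking through the scheduler, the placeholder dimensioning rule above (backup sized for a single node failure, multiplexed across hosts as in Fig. 1) can be sketched in a few lines of Python; the data layout is an illustrative assumption, not the paper's actual data model:

```python
from collections import defaultdict

def placeholder_demand(supported_pods: list[dict]) -> int:
    """Resource demand of one placeholder under the single-node-failure
    assumption: at most one node fails at a time, so the placeholder is
    dimensioned for the largest total Pod size on any single host rather
    than for the sum over all supported Pods."""
    per_host = defaultdict(int)
    for pod in supported_pods:
        per_host[pod["host"]] += pod["size"]
    return max(per_host.values(), default=0)

# Two unit-size Pods hosted on different nodes multiplex into a single
# unit of backup, as in the right-hand example of Fig. 1 (values are
# illustrative).
print(placeholder_demand([{"host": "edge-1", "size": 1},
                          {"host": "edge-2", "size": 1}]))  # -> 1
```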
Scheduler: Online Pod Scheduling for Fulfilling Delay Requirements Our online scheduler component is in charge of deploying the incoming Pod requests on the fly, with awareness of their delay and computational requirements, and also of deploying the respective placeholders for ensuring high reliability. It works in polynomial time, and its approximation ratio is 3 in terms of the total amount of placeholder resources sacrificed for guaranteeing high reliability against single node failures. The operation of our online Pod scheduler The scheduler works in an online manner: it processes the users' application requests one-by-one at the time of their submission. The major steps of scheduling are showcased in the schedule box in Fig. 2 and in Algorithm 1. Since our scheduler must give a solution that meets the delay requirements, the scheduling starts with the identification of the options that the Pod's requirements allow. More precisely, it pinpoints the nodes that are in the radius of the given origin (Line 8 of Algorithm 1). In case the origin is a Pod, the algorithm determines its current host and gets the nodes around that. It is possible that none of the nodes in the radius has enough computational resources (processor, memory, network bandwidth) for the new application. If none of the listed nodes has enough computational resources, our algorithm tries to migrate Pods from their current hosts to somewhere else, in order to free up some resources for the actual request (Line 11 of Algorithm 1). We present the dynamic operational challenges that our algorithm must solve, i.e., migration and fail-over, in Section 3.4. During Pod scheduling, we have to keep in mind the scarcity of the edge resources. Therefore, our algorithm first tries to place each Pod without increasing the total placeholder size in the system, while respecting the delay requirement (Lines 15 and 26 of Algorithm 1). When we cannot find any solution that keeps the total backup resource size intact, we have to deploy the Pod first and then create a new placeholder or increase an existing placeholder's size for the Pod (Line 28 of Algorithm 1). Our node selection strategy for Pods favors dispersing them among nodes, leading to a balanced utilization in the system, which in turn decreases the placeholder resources necessary to support single node failures (Lines 20 and 22 of Algorithm 1). In contrast, the placement of placeholders favors those nodes that have a high number of nodes in their vicinity in terms of delay: these "central" nodes are good choices for placeholders, since they can support Pods on many nodes around them. The scheduler does not cover a Pod with a placeholder if the available computational resources do not allow the placeholder creation or size increase, or if the delay requirement is so strict that only the starting node appears in the radius (Line 17 of Algorithm 1). Complexity Analysis of our Proposed Scheduler The scheduler algorithm processes the incoming Pod requests at the time of their arrival. Therefore, in a globally available system, the scheduler component needs to act fast when a request comes in. We show in Theorem 1 that our scheduler runs in polynomial time. Theorem 1 Our proposed online scheduling algorithm has polynomial complexity. Proof Let us denote the set of nodes with N and the set of Pods with P. In the beginning, getting the nodes in the radius around the origin can be done in O(|N|). Then, the online scheduling algorithm tries to deploy the incoming Pod without increasing the total placeholder size in the system. To this end, the algorithm collects the placeholders in the radius and checks the network and computational constraints. This collection and constraint check have the following complexity: O(|N|² + |P|²). In the next step, the algorithm sorts the nodes based on their number of deployed Pods and their number of network connections that fulfill the delay requirements. In the worst case scenario, the algorithm has to do this sorting two times, which means its complexity can be approximated with O(2|N| log |N|) = O(|N| log |N|). After the sorting, the selection of the best fitting node and the deployment takes constant time.
To summarize, the complexity of our online Pod scheduling algorithm (without migration) can be approximated by O(|N| + |N|² + |P|² + |N| log |N|) = O(|N|² + |P|²), which equals O(|N|²) when |N| > |P|, and O(|P|²) when |N| < |P|. Approximation Bound on Placeholder Provisioning In this section we prove that our scheduler is a 3-approximation algorithm in terms of the amount of placeholder allocation for Pods. As the first step, let us create a graph G = (V, E), where the vertices represent the nodes and the edges of the graph represent the connections between the nodes. We denote the set of Pods as P. In the proofs of the approximation we use the graph's diameter d(G) = max u,v∈V d(u, v), the length of the "longest shortest path" between any two graph vertices u, v, where d(u, v) is the distance between the vertices. We define the group of vertices that we call buds in Definition 2. Definition 2 A vertex is a bud if it connects to at least one leaf. Furthermore, we make an assumption about the graph model and the latency requirements of the Pods in order to render the approximation analysis of our scheduler algorithm analytically tractable. The first part of the assumption is about the size of the topology and the resource capability of each node. The second part simplifies the number and the requirements of the Pods to be deployed. Assumption 1 G is a simple, connected graph with |V| = n > 3; each vertex in G represents a node in the Kubernetes cluster and has infinite capacity. Edges in G represent unit latency distance between the vertices. |P| = |V|; moreover, each Pod p ∈ P has unit resource requirement, i.e., homogeneous Pod sizes, and there is a one-to-one mapping between the Pods' origins and the vertices in the graph: pᵢ → vᵢ; pᵢ ∈ P, vᵢ ∈ V, i.e., every vertex is the origin of a Pod. The delay requirement of each Pod makes the neighboring nodes of the Pod's origin eligible, and no other nodes; i.e., nodes farther than 1 hop yield too much delay for the service deployed in the Pod. Note that in the following we consider Assumption 1 to hold. It is partly a relaxing assumption, e.g., in terms of Pod and placeholder placement, as infinite node capacities are supposed, but partly specific, e.g., in the aspect of origin selection. In terms of latency requirements, the assumption considers an extremely restrictive scenario. The goal of an economical scheduler is to find the minimum amount of placeholders that can support all Pods in the system in case of one node's failure. Let us denote by OPT the optimal solution and by HEUR the solution that our online scheduler algorithm yields. Let us denote the number of buds as b, and the diameter of the graph as d. The lower bound of the optimal solution can be deduced from the number of buds and the diameter of the graph. Therefore, we define the lowest amount of placeholders that can theoretically be achieved in Lemmas 1 and 2, using the diameter and the number of buds, respectively. Lemma 1 OPT ≥ ⌈(d + 1)/3⌉. Proof G with diameter d has at least one shortest path with length d and must have at least d + 1 nodes. Thus, there is a subgraph G′ in G that can be represented as a path graph, which has d + 1 vertices. The Pods' delay requirement allows only the origin node and its neighbors (see Assumption 1) as their hosting node. Since every node is an origin for a Pod, the number of vertices in each Pod's radius (whose origin is in G′) is 2 or 3 in G′.
In the path graph representation, the minimum number of sets that cover all nodes at least once, where each set contains only neighboring nodes, equals the number obtained by dividing the nodes into groups of three. One can see that this number of sets gives the minimum number of placeholders that should be deployed in G. Lemma 2 OPT ≥ b. Proof We know that a bud is connected with at least one leaf, and each Pod's latency constraint allows only the neighbors of the origin node. Therefore, only two nodes (the bud and the leaf) are in the radius of the Pods whose origin node is a leaf. Consequently, one of the nodes in each bud-leaf pair must hold a placeholder. From this statement, one can see that the number of placeholders must be greater than or equal to the number of buds. We state, with Lemma 3, that our heuristic solution will have at least one Pod that shares its placeholder with at least one other Pod. Lemma 3 HEUR ≤ n − 1. Proof HEUR ≤ n, as |P| = n. By the heuristics applied in our online scheduling algorithm, equality occurs only in the case when placeholders cannot be multiplexed. This would occur only in a G consisting of 1-degree vertices, which is impossible with n > 3, hence the statement. In order to prove the approximation bound of our scheduler, we have to identify the proportion between: i) the diameter and the optimal amount of placeholders; ii) the number of buds and the amount of placeholders provided by our heuristic solution. Therefore, in Lemma 5 and Lemma 6 (in the Appendix) we prove that the number of placeholders is directly proportional to the number of buds and to the diameter value as well. Simple, connected graphs can have diverse combinations of diameter value and number of buds that affect the number of placeholders provisioned in the system. In Lemma 7 we present the possible graph architectures that simple, connected graphs can have with diverse diameter and bud value combinations. From Lemma 5 and Lemma 6 we can relate the number of buds and the diameter to the number of provisioned placeholders. Based on this observation, we define the approximation bound of our heuristic solution for the combinations of diameter and number of buds where the latter is minimal and the former is maximal. To summarize the previously presented results, we state and prove the approximation bound of our scheduler algorithm in Theorem 2. Theorem 2 Our online scheduling solution is a 3-approximation algorithm for providing joint placement of placeholders of Pods (HEUR ≤ 3·OPT). Proof In Lemma 8 (in the Appendix) we prove that on all possible inputs, the approximation ratio between our heuristic solution and the optimal solution is always less than or equal to 3. Therefore, our scheduler is a 3-approximation algorithm in terms of the amount of placeholder allocation under Assumption 1. Pod Migration and Fail-Over There are certain dynamic operational challenges that scheduling algorithms must face; as a remedy, we propose a migration policy. Network-aware migration of deployed Pods is triggered when a new Pod request comes in but the available resources are not sufficient. In these situations we migrate the affected Pods to new nodes to avoid disruptions. The major steps of migration, and the flow of the process between them, are presented in the "Migrate" box in Fig. 2. Although we strive to make room for the incoming Pod in the system, we migrate Pods only if their relocation frees enough resources and their assigned placeholders' size remains the same.
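A minimal sketch of this migration eligibility check follows; the state representation (free capacities per node, the Pod's radius set, and a placeholder-size predicate) is an illustrative abstraction rather than the paper's actual data model:

```python
def find_migration_target(pod, free_capacity, radius_nodes,
                          placeholder_size_unchanged):
    """Returns a node the Pod may be migrated to, or None.
    Mirrors the policy above: the target must lie in the Pod's radius
    with enough free capacity, and the Pod's assigned placeholder must
    not have to grow as a result of the relocation."""
    for node in radius_nodes:
        if node == pod["host"]:
            continue  # migrating in place frees nothing
        if free_capacity[node] < pod["size"]:
            continue  # not enough free computational resource
        if not placeholder_size_unchanged(pod, node):
            continue  # relocation would enlarge the backup reservation
        return node
    return None
```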
While the online scheduling will inevitably lead to suboptimal resource allocation for the placeholders, i.e., more resources will be dedicated to backup than the absolute minimal amount attainable at the highest multiplexing scheme for single node failures, we are not sure how often migration events will need to take place. As the authors of [9] argue, edge computing is the strongest candidate for providing low-latency responses, but it is not yet clear what edge infrastructures will be like. In addition, the edge applications' dynamics and their latency requirements will greatly affect the frequency of migrations. Our solution can also handle topology changes dynamically. The fail-over process is triggered when our scheduler perceives that a worker node is unreachable, or when a delay deterioration in the infrastructure violates Pods' delay constraints. In these cases we use the already provisioned placeholders to restart the respective Pods within their placeholders' resources. After the restart, we remove the Pods from their original placeholders, and try to find or create new placeholders for them. Both Pod migration and fail-over appear in our online algorithm. Since we consider the delay requirements as hard constraints, both of these methods take the delay requirements into account. Our re-scheduler operates on all Pods at once, which renders migration or fail-over meaningless during its execution. Complexity analysis of Pod migration calculation In every system, the migration of virtual entities, e.g., Pods, is an expensive process in terms of execution time and operational steps. Although it is a costly operation, we show that our Pod migration algorithm runs in polynomial time, and we prove its polynomial complexity in Theorem 3. Theorem 3 The migration calculation in our scheduler has polynomial complexity. Proof Let us denote the set of nodes with N and the set of Pods with P. In the migration process, the algorithm knows the characteristics of the new Pod (that cannot be deployed in the system due to lack of resources) and the nodes that are in the latency radius of the Pod's origin node. Our solution iterates over all those nodes' Pods and tries to migrate them until one of the nodes has enough free resources to host the new Pod. This means that in the worst case we have to attempt the migration O(|P|) times. When we examine whether a Pod is "migratable", we check the following constraints: i) the actual node will have enough free resources for hosting the new Pod in case we migrate the examined Pod to another node (can be done in O(1)); ii) at least one of the nodes in the examined Pod's radius has enough free resources for that Pod (O(|N|)); iii) when a placeholder is assigned to the examined Pod, we do not have to increase its size if we deploy the Pod to a new host (if |P| > |N| then O(|P|²), otherwise O(|N|² log |N|)). If all constraints are met, we migrate the examined Pod, so we can deploy the new Pod to its original host. The complexity of Pod migration is therefore O(|P|·(|N| + |P|²)) = O(|P|³) if |P| > |N|, and O(|P|·|N|² log |N|) otherwise. As for the technical migration overhead, we argue that stateless [26] application components can be migrated with minimal extra resources. The stateless design, of course, must be supported by a distributed cloud database [24,25], which transforms the punctual migration overhead into a continuous synchronization of application states onto multiple database instances running on nodes potentially hosting the stateless application, and which leads to extra consumption in terms of compute, memory and network resources.
Re-scheduler: An Offline Orchestrator to Minimize Provisioned Backup Resources Operating beside the scheduler, our re-scheduler is responsible for the offline minimization of the total provisioned backup resources in the system. The main difference between the two solutions is in the submission pattern of the Pods. While the scheduler works in an online manner, the re-scheduler better approximates the minimum amount of necessary placeholders, as it works in an offline manner and is fed with the batch of all deployed Pods. The Operation of our Re-Scheduler Our re-scheduler has three major phases: i) placeholder deployment; ii) Pod deployment; iii) repair phase. The flowchart of the phases is presented inside the "Re-scheduler" box in Fig. 2. As for the first phase, according to our intuition, the nodes that could host the most Pods are the best choices for placeholders: placeholders on them can cover all those Pods if they are placed elsewhere, which maximizes the multiplexing effect and hence yields the least possible resources reserved for placeholders. Therefore, in the first phase, as shown in Algorithm 2, we reserve the minimum amount of placeholders on the nodes (Lines 2 and 3 of Algorithm 2) that could possibly host all Pods to be deployed. The deployment of Pods that can be hosted only on a subset of nodes, e.g., in a strict latency radius, is challenging. The order of Pod deployment follows the number of possible nodes that could host a Pod (Line 6 of Algorithm 3), which mainly corresponds to the tightness of their delay requirements. We deploy the Pods with the fewest options first, then move forward to Pods with looser latency requirements. At the end of this phase, each Pod is deployed and all of them are covered with a placeholder, as Algorithm 3 indicates. Pod migration is an expensive operation, since during the migration the behavior of the application can be non-deterministic and the service provider has to guarantee the seamless relocation of the components. Therefore, the cost of migration is not negligible in the minimization process in our re-scheduler. When the re-scheduler is triggered, a deployment that defines the host node, determined by the online scheduler, is in effect for each submitted Pod. Relying on that predefined deployment, in the second phase the re-scheduler strives to deploy the Pods on those nodes that already host them. Due to this behavior, our solution can minimize the number of migrations while still minimizing the amount of total provisioned resources. It is possible that some of the reserved placeholders' sizes might not be enough to back up all the Pods that have been assigned to them. In order to ensure full reliability, we have to repair those failed placeholders, which are one input to Algorithm 4. Since we minimize the total footprint of provisioned placeholders, we reassign Pods from each failed placeholder to other placeholders, or re-deploy them to other nodes if migration is favored (Line 2 of Algorithm 4). If we cannot find any solution that would keep the total placeholder size on the same level, we have to increase the broken placeholders' size (Line 6 of Algorithm 4), or instantiate new placeholders (Line 10 of Algorithm 4).
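The ordering heuristics of the first two phases above can be sketched as follows; the container structures (`hostable_pods_on`, `feasible_nodes_of`) are illustrative assumptions rather than the paper's data model:

```python
def placeholder_node_order(nodes, hostable_pods_on):
    """Phase 1: nodes that could host the most Pods are preferred for
    placeholders, maximizing the multiplexing effect."""
    return sorted(nodes, key=lambda n: len(hostable_pods_on[n]), reverse=True)

def pod_deployment_order(pods, feasible_nodes_of):
    """Phase 2: Pods with the fewest feasible hosts (tightest latency
    radius) are deployed first, looser Pods later."""
    return sorted(pods, key=lambda p: len(feasible_nodes_of[p]))

order = pod_deployment_order(["p1", "p2"],
                             {"p1": ["n1", "n2"], "p2": ["n1"]})
# -> ['p2', 'p1']: the Pod with a single feasible host goes first
```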
Complexity Analysis of our Re-Scheduler Although our re-scheduler works in an offline manner, i.e., it processes the batch of all the deployed Pods' requirements, we must not let its execution time increase unpredictably. The state of the system is continuously changing: Pods come and go, nodes might fail. If such events occur while the re-scheduler is running, the placement result yielded by the algorithm may not be valid anymore. Therefore, it is of paramount importance to design the re-scheduling algorithm to be fast. In Theorem 4 we prove that our proposed re-scheduler algorithm runs in polynomial time. Theorem 4 Our proposed re-scheduler algorithm has polynomial complexity. Proof Let us denote the set of nodes with N and the set of Pods with P. In the re-scheduler algorithm's first phase we deploy the placeholders. As the first step, it calculates the nodes in each Pod's radius. This calculation can be estimated with O(|P||N|). After the calculation, the re-scheduler sorts the nodes by the number of the Pods that could be hosted on them. The complexity of this sorting is O(|N|²). In the next step, it deploys each placeholder and reorders the list after every deployment. This step is again polynomial, and thus the whole first phase runs in polynomial time. In the second phase, the re-scheduler places the Pods on the nodes. First, it sorts the Pods by their number of fitting nodes. The complexity of this sorting is O(|P|²). Then, the algorithm iterates through the sorted Pods and deploys them on the least utilized node. The worst case complexity of this iteration can be estimated by O(|P||N|). The worst case complexity of the whole second phase is O(|P|²) if |P| > |N|, and O(|P||N|) otherwise. In the third phase, the algorithm checks if any broken placeholders exist, and repairs the failed ones. When it checks the placeholder constraints, it iterates through all the placeholders, and for each of them it also iterates through all the nodes and the Pods. The worst case complexity of this validation is O(|P||N|²), as in the worst case each node contains one placeholder. Then, the re-scheduler goes through all the broken placeholders and tries to fix them with Pod reassignment to different, already instantiated placeholders, or to other nodes. During the reassignment for each broken placeholder, the algorithm gets those nodes whose constraints are not fulfilled and tries to reassign the specified Pods. Summarizing, each phase of our proposed heuristic algorithm runs in time polynomial in |N| and |P|, hence the worst case complexity of the whole re-scheduler is polynomial. Providing Scalability with Node Clustering As the size of the system grows, not only finding the best placement for the service components, but even measuring the network characteristics becomes challenging. The continuous measurement is necessitated by the fact that the underlying network and topology may change, and such events can cause application failures and delay requirement violations. Our clustering solution not only reduces the overhead of determining network characteristics, but also helps the scheduling of the applications by compressing the topology. By topology compression we mean that, since a clustering algorithm forms groups (clusters) from a set of nodes, the scheduling algorithms do not need to iterate through all the nodes; it is sufficient to calculate with the clusters. Furthermore, our solution provides valuable input for service providers who want to implement self-organizing network features to dynamically organize their federated system hierarchically, based on their network topology.
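To illustrate the topology-compression idea, the sketch below runs a radius query on a small cluster-level delay matrix instead of the full node-level one; all names, values, and structures are illustrative assumptions:

```python
# Cluster-level view of a large topology (illustrative): many nodes
# collapse into a handful of clusters with their own delay matrix.
CLUSTER_MEMBERS = {"c1": ["edge-1", "edge-2"], "c2": ["edge-3"], "dc": ["cloud"]}
CLUSTER_DELAY = {
    "c1": {"c1": 0.0, "c2": 8.0, "dc": 25.0},
    "c2": {"c1": 8.0, "c2": 0.0, "dc": 20.0},
    "dc": {"c1": 25.0, "c2": 20.0, "dc": 0.0},
}

def candidate_nodes(origin_cluster: str, delay_criterion: float) -> list[str]:
    """Iterate over clusters instead of nodes; expand only the clusters
    that satisfy the delay criterion."""
    return [node
            for cluster, d in CLUSTER_DELAY[origin_cluster].items()
            if d <= delay_criterion
            for node in CLUSTER_MEMBERS[cluster]]

print(candidate_nodes("c1", 10.0))  # nodes of c1 and c2, but not the DC
```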
Dynamic, Delay-Based Clustering Problem We propose a clustering algorithm that groups the physical nodes into clusters in order to ensure that dynamic application placement and network delay measurements can scale effectively in large topologies. Our solution is an agglomerative clustering algorithm that creates cluster layers hierarchically, where each layer contains clusters that are constructed with a new delay requirement belonging to a Pod request. The input of our clustering algorithm is a topology (that may have been clustered before) and a set of delay values that will be used for clustering the nodes (or the clusters) inside the topology. In this agglomerative clustering approach, each node starts in its own cluster; then the clusters are merged as we build up the hierarchy, where each layer (and its clusters) is built based on increasing delay requirement values. In cases when the topology is not clustered with a given delay value yet, a new layer is created (visualized in Fig. 2, in the "New layer creation" box inside the "Clustering" box), relying on the underlying layer that is clustered with the greatest delay value that is still less than the new one. Regarding the application placement, our clustering mechanism guarantees that all nodes inside a cluster fulfill a Pod's delay requirement in case the cluster is an output of the clustering with that delay value. Hence, service delay requirement violations cannot happen inside a cluster for the given delay value, no matter which member node hosts the given Pod. The clustering may lead to different outcomes, in which a node may belong to different clusters. An illustrative example can be viewed in Fig. 3, where, on the top, a simple topology is depicted with delays on the network connections between the nodes. On the bottom, we present how the topology can be clustered based on different delay requirements. The red circled clusters (bottom middle) are non-deterministic, since they overlap each other and the clustering result depends on the processing order of the nodes. There are some delay values (denoted as d in Fig. 3) for which the clustering is deterministic, i.e., each node belongs to exactly one determined cluster no matter the order of the nodes during the clustering process. Formally, we define this problem in Definition 3. Definition 3 Given a graph G = (V, E) and a delay value d. G is an undirected complete graph with weighted edges that fulfill the triangle inequality, and d is a positive number that represents the delay requirement of a service. DETERMINISTIC CLUSTERING PROBLEM: Can G be clustered based on d in a deterministic manner? A key feature of our solution is to find those delay values (for the given topology) that provide such deterministic clustering results. We call these delays vantage-free delays, and we seek them at the initialization of the clustering component ("Initialization" box inside the "Clustering" box in Fig. 2). These vantage-free clustering layers can be identified in polynomial time, as we state in Lemma 4. Lemma 4 We can find an answer for the DETERMINISTIC CLUSTERING PROBLEM in polynomial time. Proof Let us represent our topology with a complete graph G, where the nodes are the vertices and the edge weights are the smallest available latency values between each node pair. The proof consists of two steps. First, we delete the edges from G whose weight is greater than d, in O(|V|²). Then, we examine whether each disconnected component is a complete subgraph. This examination can be done in O(|V|²). If all of the disconnected components are complete subgraphs (cliques), the output is positive (yes): G can be clustered based on d deterministically; otherwise it cannot.
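A direct Python rendering of this two-step check follows; the dict-of-dicts delay representation is an illustrative assumption:

```python
def is_vantage_free(nodes, delay, d):
    """Lemma 4 check: drop edges with weight greater than d, then the
    clustering is deterministic iff every remaining connected component
    is a clique. delay[u][v] is the node-to-node delay."""
    neighbors = {u: {v for v in nodes if v != u and delay[u][v] <= d}
                 for u in nodes}
    seen = set()
    for u in nodes:
        if u in seen:
            continue
        # Collect u's connected component with a simple traversal.
        component, stack = set(), [u]
        while stack:
            x = stack.pop()
            if x in component:
                continue
            component.add(x)
            stack.extend(neighbors[x] - component)
        seen |= component
        # Clique test: every member must neighbor all other members.
        if any(neighbors[x] & component != component - {x} for x in component):
            return False
    return True
```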
The purpose of defining these delay values and their clustering layers is that they serve well as underlying layers for clustering with other, non-vantage-free delay values, since these vantage-free delays do not change unless the topology changes. In order to support large scale scheduling, the purpose of our clustering solution is to create the fewest clusters that cover all the nodes, such that the nodes cannot violate the given delay requirement within their cluster. We call this problem the DELAY BASED CLUSTERING PROBLEM. A formal definition of the DELAY BASED CLUSTERING PROBLEM is given in Definition 4. We state and prove that the DELAY BASED CLUSTERING PROBLEM is a hard problem in general in Theorem 5. Definition 4 Given a graph G = (V, E) that represents the physical topology, and d, a positive number that equals the delay requirement of the service. G is an undirected complete graph whose edge weights fulfill the triangle inequality. DELAY BASED CLUSTERING PROBLEM: Cluster the vertices with the minimum number of clusters subject to the following requirements: i) a cluster has to be a clique; ii) the weight of every edge inside the clusters is less than or equal to d; iii) clusters cannot overlap with each other. Theorem 5 The DELAY BASED CLUSTERING PROBLEM is NP-hard. Proof In the proof of this theorem, we use a Karp reduction from a known NP-hard problem, the CLIQUE COVER PROBLEM, which is the algorithmic problem of finding a minimum clique cover. A clique cover of a given undirected graph is a partition of the vertices into cliques. As a preparation step, we construct G′ by deleting all of the edges with weight greater than d from G, since those edges surely will not appear inside any cluster. After this step, we can ignore the edge weights in G′. In this case we strive to find the minimum number of non-overlapping clusters that are cliques and cover all the vertices. Note that, since service delay requirement violations cannot happen inside a cluster for the given delay value, the clusters can only be cliques in our solution. Let G′ be the input of the clique cover problem. One can see that finding the minimum clique cover (the clique cover that uses as few cliques as possible) equals finding the minimum number of clusters (that are also cliques) that solve the DELAY BASED CLUSTERING PROBLEM. Likewise, a solution for the DELAY BASED CLUSTERING PROBLEM gives a clique cover for the MINIMUM CLIQUE COVER PROBLEM. Since G′ can be constructed in polynomial time (O(|V|²)) from G, and covering G with the minimum number of clusters is fully compliant with the CLIQUE COVER PROBLEM on G′, the DELAY BASED CLUSTERING PROBLEM is NP-hard. We proved that the DELAY BASED CLUSTERING PROBLEM is a hard problem in general, although in some cases it is solvable in polynomial time. In Theorem 6 we state that the DELAY BASED CLUSTERING PROBLEM is solvable in polynomial time in cases when the DETERMINISTIC CLUSTERING PROBLEM gives a positive answer for the same input. Proof In Lemma 4 we proved that we can find an answer for the question of Definition 3 in polynomial time. In cases when the question of Definition 3 has a positive answer, if we delete all edges with weight greater than d from G, then all of the disconnected components of G (after the deletion) are complete subgraphs.
Since none of the remaining edges has a weight greater than d, the optimal solution for the DELAY BASED CLUSTERING PROBLEM equals the set of disconnected complete subgraphs. Operational Steps of our Clustering Algorithm Once our clustering component is initialized, the scheduler receives the node clusters from it whenever it is looking for nodes within a certain delay radius. The scheduler calls the clustering component with the origin node and the delay radius required by the Pod. If there is already a constructed cluster layer with that delay value, the algorithm finds the appropriate cluster (the one that holds the starting node) and returns it to the scheduler. If the topology is not yet clustered with the given delay value, the algorithm creates a new layer based on the underlying layer that is clustered with the greatest delay value still less than the given one. Relying on that underlying layer and its delay matrix, the algorithm creates the new layer in six steps: 1) delete the edges whose weight is greater than the delay requirement; 2) find a maximal clique in the graph; 3) create a cluster from the found maximal clique; 4) remove the vertices of the clique from the graph; 5) return to step (2) until all of the vertices are deleted from the graph; 6) create the new delay matrix. After we have defined the clusters with the new delay requirement, a new delay matrix must be created that holds the delay values between the clusters. We apply the conservative complete-linkage clustering method (also known as farthest-neighbour clustering) to calculate the delay values between the clusters. The complete-linkage method means that the delay value between two clusters equals the delay between those two nodes (one in each cluster) that are farthest away from each other (have the highest delay).
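To make the six steps concrete, here is a minimal, self-contained sketch in Python. The greedy clique construction and the data layout are our illustrative assumptions; the paper does not prescribe how the maximal clique in step (2) is found.

def create_layer(nodes, delay, d):
    """Greedy sketch of the six-step layer creation (steps 1-6 above).

    nodes: node ids; delay: dict frozenset({u, v}) -> latency; d: delay bound.
    Returns (clusters, cluster_delay), where cluster_delay follows complete
    linkage: the delay of two clusters is the largest pairwise node delay.
    """
    # Step 1: drop edges above the delay requirement.
    ok = lambda u, v: delay[frozenset((u, v))] <= d
    remaining = set(nodes)
    clusters = []
    while remaining:                       # step 5: repeat until empty
        # Steps 2-3: grow a maximal clique greedily from an arbitrary seed.
        seed = next(iter(remaining))
        clique = {seed}
        for v in remaining - {seed}:
            if all(ok(v, u) for u in clique):
                clique.add(v)
        clusters.append(clique)
        remaining -= clique                # step 4: remove clique vertices
    # Step 6: complete-linkage (farthest-neighbour) delay matrix.
    cluster_delay = {}
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            cluster_delay[(i, j)] = max(
                delay[frozenset((u, v))]
                for u in clusters[i] for v in clusters[j])
    return clusters, cluster_delay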
Implementation Choices Today's most widely used resource and service manager is Kubernetes [17]: it orchestrates containers and provides automatic scheduling, scaling and self-healing features. We have built a prototype of our edge scheduler that we integrate with Kubernetes. In this section we present Kubernetes extensions, including ours, that propose edge computing support. Kubernetes on the Edge Kubernetes distinguishes two types of nodes, which might be either virtual or physical machines: i) the master, which is responsible for coordinating the cluster; and ii) the worker nodes. Pods are the smallest deployable units of computing in Kubernetes; several Pods can be instantiated on a node, and the Kubernetes master schedules and manages Pods across the nodes in the cluster. A Pod contains one or more containers, e.g., Docker containers, with shared storage/network, and a specification for how to run them. As edge computing becomes the norm rather than the exception, the community has started to work on extending Kubernetes with several capabilities that support operation in edge computing systems. KubeEdge [15] is built upon Kubernetes to extend application scheduling capabilities to nodes in an edge computing environment. It provides infrastructure support for networking and application deployment between cloud and edge. Besides KubeEdge, another multi-cluster-oriented framework has attracted the community's attention: Kubefed [16] (Kubernetes Cluster Federation), the official Kubernetes cluster federation implementation. It allows the management and configuration of multiple Kubernetes clusters with a single set of APIs from a "host cluster". Neither KubeEdge nor Kubefed features network characteristic measurements between the worker nodes, nor do they make guarantees about fulfilling the delay requirements that novel applications pose. Both of these frameworks apply the default Kubernetes scheduler, and they work on manually and preliminarily constructed clusters, i.e., they do not support dynamic cluster creation based on, e.g., network properties. K3s [12] and Microk8s [19] are frameworks that make Kubernetes lightweight by eliminating some of its components and by reducing the operational overhead. Although a distributed system can enjoy the small-footprint operation capability that K3s and Microk8s provide, in edge computing, besides the limited physical resources, there are other issues that the system has to face, e.g., unreliability, large scale, service level agreements, etc. None of the listed frameworks is fully capable of handling an edge cloud system with latency-critical application requests. They propose some steps forward in supporting an edge cloud system, but we argue that they are missing features that must be considered when a fully operational geo-distributed manager is proposed. These deficiencies call for the advanced system design that we propose throughout this article. Extending Kubernetes with our Prototype Our prototype extends Kubernetes [17] with the ability to work in a large-scale edge computing topology and makes latency-critical Pod scheduling possible, while ensuring high reliability. Our solution works next to Kubernetes' default scheduler and extends its functionality with awareness of network characteristics. Besides the previously presented scheduler, re-scheduler and clustering modules, there are two more major components that make Kubernetes-edge-scheduler a fully operable edge scheduler. The Monitoring component in Fig. 2 is responsible for collecting the underlying network delays. The measurement of the delay values can be performed in three ways with our implementation: i) with "ping pods" that are distributed across all nodes and send the measured delay values (between all node pairs) to Kubernetes-edge-scheduler; ii) with Goldpinger [6]; iii) with static files. The Event handler (also presented in Fig. 2) connects Kubernetes system events, e.g., Pod submission, Pod removal, etc., with our solution. It subscribes to important events and, after some pre-processing, forwards the event to our system. The source code of our implemented solution has been released [18]. Comprehensive Evaluation of our Kubernetes Scheduler In this section we present the efficiency of our Kubernetes-edge-scheduler with comprehensive evaluation scenarios. First, we run large-scale simulations to evaluate the performance and scalability of our scheduler and re-scheduler algorithms; then we deploy a streaming analytics application in a Kubernetes cluster featuring our proposed scheduler to demonstrate the benefits of delay-awareness at the edge. In a realistic operation mode, customers send their requests in an online manner, one after another, distributed in time. The offline placeholder minimization process, however, takes a single batch request that contains all already submitted Pods.
To make the best of the two worlds, our online scheduler and our offline re-scheduler work together: while the scheduler performs the dynamic operations, like Pod scheduling, the re-scheduler can periodically minimize the size of the placeholders provisioned in the system. The provider, however, has to find the balance of how frequently the re-scheduler should run and how to manage the changes that the re-scheduler proposes. The offline minimization process is expensive, and the state of the system may even change during its execution. The cost of placeholder minimization in fact relies on two factors: i) the calculation demands extra computational resources; ii) the resulting Pod migrations cause instability to applications. Multiple triggering criteria can be defined to control the start of the offline minimization procedure. A feasible triggering criterion is a threshold on the ratio between the size of the deployed Pods and the size of the provisioned placeholders; this value gives good insight into the efficiency of the placeholder provisioning and the utilization of the system.
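One concrete way to encode this criterion is sketched below. The function name, the default threshold value and the input representation are our own illustrative assumptions; the article only proposes using the deployed/provisioned size ratio.

def should_trigger_minimization(deployed_size, placeholder_size, threshold=10.0):
    """Sketch of one possible triggering criterion for the offline re-scheduler.

    deployed_size: total resource size of deployed Pods;
    placeholder_size: total size of provisioned placeholders.
    With threshold=10.0, the re-scheduler fires once the placeholders grow
    beyond roughly 10% of the deployed Pods' size (a hypothetical bound).
    """
    if placeholder_size == 0:
        return False            # nothing provisioned yet, nothing to compact
    ratio = deployed_size / placeholder_size
    return ratio < threshold    # placeholders grew too large -> compact them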
Large Scale Simulation Setting In our experiment setup we simulate high numbers of edge nodes, in the range of 500 to 10000, and a number of Pods in the same order of magnitude to be scheduled (and then re-scheduled for optimization) on those edge nodes. The resource capacity of the edge nodes is identical and is represented by 12 GB of memory. On the other hand, the inter-node delay values are heterogeneous due to their hierarchical network setting: we assume 3 edge nodes in one server rack with 1 ms delay between any pair of them, and 2-3 racks in close vicinity with 5 ms delay between the edge nodes within. The network topology of our simulations is similar to the one depicted in Fig. 1, with delays of 10 ms, 20 ms and 40 ms on the links from the bottom up. Therefore, the delay between any 2 edge nodes is between 1 ms and 152 ms. A central cloud is assumed to be reachable at the top of the network hierarchy, so the network latency between any edge node and the cloud is 76 ms. The computational resource demand of Pods is represented homogeneously by 1 GB of memory for simplicity. Their delay requirements, however, are randomly drawn from the following values: 1, 20, 50, 60, 70, 80, 90, 150, 160, 170 ms. The corresponding origins for which these delay limits are defined are randomly scattered over the locations of the edge nodes. The order of deploying the Pods with the online scheduler is totally randomized over the measurements. For all parameter settings we run 100 measurements to account for the randomized infrastructure, Pod delay requirements and Pod deployment order. Evaluation of Placeholder Provisioning We compare the performance of our scheduler and re-scheduler in terms of how effectively they provision the resources to provide high reliability. On the left side of Fig. 4 we compare the created placeholders' total size to the total amount of resources of deployed Pods with both our scheduler and re-scheduler. The right plot in Fig. 4 shows the size of the Pods that need to be migrated after the re-scheduler has finished, compared to the total amount of resources of the deployed Pods. In each scenario we scheduled three times as many Pods as there were edge nodes. All scenarios, distinguished with different colors per plot in Fig. 4, ran on topologies of various sizes. The number of edge nodes is depicted on the x-axis; the y-axis shows the achieved size ratio in percentage (lower is better). We evaluate our re-scheduler in two scenarios. The scenarios differ in the strategy of Pod relocation in the repair phase. In the first scenario, we favor migration: the algorithm migrates a Pod when the migration fixes a failed placeholder in the repair phase. In the other scenario, we rule out the migration possibility during the repair phase. The re-scheduler always achieves better performance, i.e., more compacted placeholders, compared to the online scheduler's results (see the left chart). This is expected, since the re-scheduler operates on all Pod requests at once, in contrast to the online algorithm, which strives to minimize the placeholders' size but processes only one request at a time. We achieve the best placeholder/Pod ratios with our re-scheduler when migration is favored. In these cases, the size ratios are around 8%, while in the other scenario, where we consider migration too expensive, we achieve results around 8.5%, and the online algorithm scores between 9 and 10% (see the left plot in Fig. 4). However, besides the improvement in the total size of backup resources, the number of migrations can be relatively high, around 18% of the total Pods, when migration is favored. In contrast, when migration is avoided during the repair phase, the number of migrated Pods is around 6% (see the right plot in Fig. 4). To conclude the results presented in Fig. 4, we state that both our scheduler and re-scheduler can effectively provision resources for placeholders and Pods: in all cases the size ratios are less than 10%, which means our schedulers provision less than 10% of the requested resources of applications to provide high reliability for all applications while respecting their delay criteria. Evaluation of Execution Times We evaluated our scheduler and our re-scheduler with and without the clustering feature on topologies of different sizes. The achieved execution times are presented in Fig. 5, where the x-axis shows the number of edge nodes and the y-axis presents the achieved runtime. In all scenarios, we submitted three times as many Pods as there were edge nodes. An evaluation started when the first request was submitted and ended when the algorithms gave their final deployment. Although the re-scheduler has polynomial complexity (Section 4.2), it took the most time to return its resulting deployment. In all scenarios, we either deployed all Pods with the online algorithm one by one, or the re-scheduler received the same request set in one batch to work with. Both the scheduler and the re-scheduler benefit from clustering in terms of execution time, since in every case the algorithms finished earlier with clustering; e.g., on 10000 edge nodes, the re-scheduler with clustering deployed all 30000 Pods on average in 263 seconds, while it took 317 seconds without clustering. A summarizing table, Table 1, shows the complexities of the offline minimization, the online scheduling and the migration methods. We denote the set of nodes with N and the set of Pods with P. The presented results exclude the initialization phase, since it is not an integral part of the scheduling process, and the vantage-free layers have to be found only once. Since we use hierarchical clustering, as the number of layers increases, the clustering becomes faster. This effect can be seen in Fig. 5.
The clustering has negligible overhead, as the execution times improve with the clustering feature, since the base layers were constructed with the vantage-free delays. To conclude the results presented in Fig. 5, we state that Kubernetes-edge-scheduler scales well both with the growing topology and with the increasing number of requests. Even the slower re-scheduler algorithm took less than 300 seconds for mapping 30000 Pods, which means it took an average of 10 ms to deploy a single Pod. While we are concerned about the quadratic complexity in terms of the number of nodes, and we strive to decrease the search space of the algorithms by clustering the nodes based on the latency measurements among them, we note that the runtime is mainly affected by the latency requirements of the applications and the capacity of the edge nodes: if both the former and the latter are stringent, then the search space is greatly reduced, possibly resulting in orders-of-magnitude lower runtimes. We argue that for typical delay-critical applications and edge node infrastructures this will be the case. Stream Analytics in the Edge We demonstrate the benefits of using Kubernetes-edge-scheduler in a real-world Spark [30] streaming application that is deployed over an emulated edge computing topology. In our use case, we have a streaming application that receives a text file through a network socket and executes a word-count application on it on the fly. In our topology we had 10 edge nodes that are close together, and 1 worker node that represented the cloud. We compared the completion time of the word-count application deployed with both the default Kubernetes scheduler and Kubernetes-edge-scheduler. The results are presented in Fig. 6, where the x-axis shows the number of allocated executors (Pods) for the application and the y-axis shows the execution time in seconds. The results show that the Spark streaming application always finished earlier when Kubernetes-edge-scheduler deployed its Pods. This outcome confirms that network delay affects streaming application performance, and as Kubernetes-edge-scheduler takes into account the network delay between the application components, it achieves better performance. More precisely, our solution reduced the average execution time by more than 60%. Using the default scheduler leads to higher variance in execution times, which is explained by the default scheduler policy that chooses nodes randomly, disregarding the characteristics of the underlying network connections. In contrast, Kubernetes-edge-scheduler considers network latency during scheduling, so the variance of execution times is lower (Fig. 6: execution times for a Spark streaming application). State-of-the-art on Reliability and Delay Guarantees of Scalable Edge Cloud Platforms In this section we present the major achievements in the literature related to our Kubernetes-based edge cloud orchestrator. We divide the discussion of the state-of-the-art into parts on i) the application requirements and features of the edge cloud, ii) high reliability and availability concepts in virtual resource environments, and finally, iii) the implementation efforts towards scalable edge cloud platforms. Latency-Critical Cloud-Native Applications and the Edge Cloud Latency-sensitive and data-intensive applications, such as IoT or mobile services, are leveraged by edge computing, which extends the cloud ecosystem with distributed computational resources in proximity to data providers and consumers.
This brings significant benefits in terms of lower latency and higher bandwidth. However, by definition, edge computing has limited resources with respect to its cloud counterparts; thus, there exists a trade-off between proximity to users and resource utilization. Moreover, service availability is a significant concern at the edge of the network, where extensive support systems such as those in cloud data centers are usually not present. To overcome these limitations, [1] proposes a score-based edge service scheduling algorithm that evaluates network, compute, and reliability capabilities of edge nodes. The algorithm outputs the maximum-scoring mapping between resources and services with regard to critical aspects of service quality. [9] introduces a new platform for enabling an edge infrastructure according to a disaggregated distributed cloud architecture and an opportunistic model based on bare-metal providers. Results from a multi-server online gaming application deployed in a real geo-distributed edge infrastructure show the feasibility, performance and cost efficiency of the solution. In order to meet the rapidly changing requirements of the cloud-native dynamic execution environment, without human support and without the need to continually improve operators' skills, autonomic features need to be added. Embracing automation at every layer of performance management enables us to reduce costs while improving outcomes. Kosińska and Zielinski [14] list the autonomic management requirements of cloud-native applications, and the authors propose that automation is achieved via high-level policies, while autonomy features are accomplished via rule engine support. One such feature of online scheduling in a cloud-native context is migration. A large body of research has tackled the issues around migration of virtual machines, containers, etc. in the cloud. E.g., [10] proposes an energy-aware virtual machine migration technique for cloud computing, which is based on the Firefly algorithm. The proposed technique migrates the maximally loaded virtual machine to the least loaded active node while maintaining the performance and energy efficiency of the data centers. In the era of cloud services, there is a strong desire to improve the elasticity and reliability of applications in the cloud. The standard way of achieving these goals is to decouple the life-cycle of important application states from the life-cycle of individual application instances: states, and data in general, are written to and read from cloud databases, deployed close to the application code. Rooted in cloud-native computing, the stateless design outsources the state embedded in computing entities, e.g., virtual machines, containers, Pods, virtual network functions, to a dedicated state storage layer, facilitating elastic scaling and resiliency [26]. In [26] the authors propose a system design that can be adapted to any cloud application without the need for complex coordination among the network control, the stateless application elements, and the state storage backend. They present the first product-phase realization of the stateless paradigm, an operational virtualized IP Multimedia Subsystem that can restore the live call records of thousands of mobile subscribers within a couple of seconds with half the resources required by a traditional "stateful" design. The high performance requirements of the application impose strict latency limits on these cloud storage solutions for state access.
Cloud database instances are therefore distributed over multiple hosts in order to strive to ensure data locality for all applications. However, the shared nature of certain states and the inevitable dynamics of the application workload necessarily lead to inter-host data access within the data center (or even across data centers, if the application requires a multi-data-center setup). In order to minimize the inter-host communication due to state externalization, the authors of [25,26] propose an advanced cloud scheduling algorithm that places applications' states across the hosts of a data center. In such a cloud-native setting, stateless cloud applications and an adaptively self-synchronizing distributed cloud database alleviate the long-standing issues of live migration within the cloud. Ensuring High Reliability and Availability on Virtualized Resources Several research papers have been published that propose some kind of scheme for improving the availability and reliability of applications in the inherently untrustworthy context of edge cloud infrastructure. Javed et al. [11] tackled the problem of separate software stacks between the edge and the cloud with no unified fault-tolerant management, which hinders the dynamic relocation of data processing. In such systems, the data must also be preserved from being corrupted or duplicated in the case of intermittent long-distance network connectivity issues, malicious harming of edge devices, or other hostile environments. A self-adapting scheme named SAB is proposed in [23]: SAB uses static and dynamic backups for VNFs (Virtualized Network Functions) over both the edge and the cloud in order to provide high availability. Fan et al. [4] propose a framework to provision the availability of SFC (Service Function Chain) requests in a data center. None of these research works consider multiplexing backup resources for multiple virtual instances like our Kubernetes-edge-scheduler does. Yala et al. [28] propose a solution for their optimization problem that strives to optimize the trade-off between availability and latency. However, their work deploys VNFs without deploying backup resources. The solutions of Kanizo et al. [13] and RABA [31] both multiplex backup resources; however, they ensure high availability for VNFs with dedicated backup nodes. In contrast, Kubernetes-edge-scheduler does the resource provisioning on general nodes that host Pods as well. In [2,5] the authors consider replica and virtual function placement to achieve lower migration time; however, they did not consider minimizing the provisioned resources assigned to replicas. The authors of [29] investigate the fog resource provisioning problem for deadline-driven IoT services to minimize the cost considering the probability of resource failures. They assume that virtual machine failures are temporary and recoverable. In contrast, we argue that every node's failure should be prepared for. Latency-Aware Cloud Platforms An online resource orchestration algorithm which takes network aspects into account is proposed in [7]. The algorithm enables the orchestrator of OpenStack to manage a distributed cloud-fog infrastructure. An embedding algorithm is proposed in [21], which instantly deploys end-to-end delay-constrained services while applying a cost-aware VNF migration strategy.
The authors' hybrid orchestration approach unites the advantages of online heuristics and offline optimization in their service orchestration method, with the goal of providing fast service placement and minimizing the cost of VNF migrations. The research community has already started extending the Kubernetes scheduler to support edge computing architectures with network awareness [3,20,22]. In contrast to our work, the scheduling method in [3] does not directly take into account the delay between the edge nodes. The authors of [20] propose a content delivery method that improves the Kubernetes scheduler with awareness of network distance using the AS path of BGP. In contrast, we use the measured delay values as the network distance property. An extension called Network-Aware Scheduler is implemented in [22], enabling Kubernetes to make resource provisioning decisions based on network infrastructure properties like latency and bandwidth. However, the application requests in [20,22] do not define a latency requirement that has to be met during their scheduling. Furthermore, none of the previous works consider providing high reliability for the applications, or dynamic topology clustering to relax the difficulties that a large-scale architecture poses. We argue that serving the main goal of edge computing, i.e., hosting delay-critical applications, requires awareness not only of network latency, but also of the unreliability and the large number of edge nodes. A key difference between previous works and our solution is that none of [3,7,20-22] deal with reliability, while our proposed solution achieves high reliability while minimizing the resources provisioned for this cause. Conclusion In this article we proposed Kubernetes-edge-scheduler, a novel scheduler that extends a Kubernetes system to operate on an edge computing architecture and to manage latency-critical, novel applications. Our contribution is fourfold: i) we defined placeholders that help to guarantee high reliability for edge applications through the provisioning of backup resources; ii) we proposed an online Pod scheduler algorithm that deploys latency-critical Pods on the fly, reacts to network- and Pod-related system events, and provides high reliability for applications; iii) an offline re-scheduler is presented that reduces the provisioned backup resources that ensure the high reliability in the system; iv) a latency-based clustering method is proposed to address the difficult tasks that a possibly worldwide-scale topology would pose in network latency measurements, Pod scheduling, etc. Using both emulated and simulated experiments, we showcased the effectiveness of our solution in terms of end-to-end application delay, the amount of provisioned resources, and the scaling quality of our Kubernetes-edge-scheduler. Funding Open access funding provided by Budapest University of Technology and Economics.
2021-07-28T13:26:22.934Z
2021-07-17T00:00:00.000
{ "year": 2021, "sha1": "c8dac87721f4f0bb2630c246881095673cd6c08e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10723-021-09573-z.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "6a97f3a30ee1d5f9268739ccac01182039a49cdc", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
220042788
pes2o/s2orc
v3-fos-license
Respiratory Activity Classification Based on Ballistocardiogram Analysis Ballistocardiogram signals describe the mechanical activity of the heart. They can be measured by an intelligent mattress in a totally unobtrusive way during periods of rest in bed or sitting on a chair. The BCG signals are highly vulnerable to artefacts such as noise and movement, making useful information like respiratory activity difficult to extract. The purpose of this study is to investigate a classification method to distinguish between seven types of respiratory activities such as normal breathing, cough and breath holding. We propose a feature selection method based on spectral analysis, namely the spectral flatness measure (SFM) and spectral centroid (SC). The classification is carried out using the nearest neighbor classifier. The proposed method is able to discriminate between the seven classes with an accuracy of 94%, which shows its usefulness in the context of telemedicine. Introduction The development of connected objects for personalized services, especially for monitoring purposes, has increased significantly worldwide over the last few years [1], more specifically those that deal with the monitoring of respiratory and cardiac diseases. Indeed, these diseases are among the leading causes of death and disability in the world. One of these respiratory diseases is Chronic Obstructive Pulmonary Disease (COPD) [2], a progressive, life-threatening lung disease. According to the World Health Organization [3], COPD affects more than 250 million people globally, causes a staggering 3.17 million deaths per year, and is associated with a huge economic burden. In fact, numbers published by the Global Initiative for Chronic Obstructive Lung Disease [4] show that the direct costs of respiratory disease in the European Union are estimated to be about 6% of the total annual healthcare budget, with COPD accounting for 56% (38.6 billion Euros) of the cost of respiratory disease. These numbers are further amplified by the ever-growing healthcare costs, the aging of the population and the wide spread of such diseases. The monitoring of respiratory activities plays an important role in the current management of patients with acute respiratory failure [5]. As a consequence, continuous monitoring of the vital signs is recommended to ensure an optimal diagnosis of a patient's state [6]. Moreover, monitoring of respiratory activity is useful for detecting respiratory disorders, such as sleep apnea, cessation of breathing in infants, shortness of breath in patients with heart failure, and so on. Hence, it is important to monitor respiratory activities such as normal breathing, cough, breath holding and expiration. A new generation of sensor-based mattresses is able to unobtrusively monitor vital signs such as the Heart Beat Rate (HBR) and the Respiratory Rate (RR). Indeed, this study considered an Optical Fiber based Sensor (FOS) [7] for the unobtrusive monitoring of the Ballistocardiogram (BCG) signal. Due to the ejection of the blood during the systole, the body's mechanical reaction is measured, hence the BCG signal. Our aim is to investigate a classification method to distinguish between several types of respiratory activities, such as normal breathing, cough and breath holding, using the BCG signal. This paper is organized as follows. Section 2 is dedicated to the material and method: it describes the data collection, BCG signal analysis, feature extraction and classification.
Section 3 provides information about the experimental results, mainly the feature illustration and classifier evaluation. Finally, Sect. 4 concludes the study and gives perspectives. Data Collection The system used for collecting data includes a small FOS mattress and a module to gather the optical data coming from the mattress [8,9]. The FOS mattress was fixed on the back of a regular office chair. The raw data is sampled at 50 Hz by the module. The BCG signals were acquired from 6 healthy participants: 3 male and 3 female, aged between 21 and 32 years. The participants were asked to follow a certain experimental protocol. Apart from normal breathing, other commonly occurring human body activities are introduced in this protocol. It is composed of the following activities: normal breathing (C1), cough (C2), normal breathing after cough (C3), breath holding (C4), expiration (C5), movement (C6). We also consider a class "other" to regroup all other activities (C7). Figure 1 illustrates an example of the BCG signal. The different human body activities are plotted in different colors. The objective is to highlight the differences in the BCG signal according to the activity. BCG Signal Analysis In this subsection we inspect the effect of the different activities on the BCG signal. Figure 2 illustrates five different activities. In the plot illustrating normal respiration (C1), we can easily extract both the HBR and RR; the long period corresponds to the movements of the thoracic cage. By extracting the distance between two consecutive peaks of this waveform, we can extract the RR. The short period appearing in the BCG signal represents the heart beats; the extraction of these little fluctuations results in the extraction of the HBR. The signals corresponding to cough (C2) and movement (C6) are very similar. Both signals reach the upper limit of the acquisition equipment, which explains the broad peaks in the BCG signals. We believe, however, that these broad peaks have different explanations: the peaks in the movement signal come from the acceleration of the subject's body, and the peaks during cough come from the reaction of the body after coughing. The post-cough normal breathing is corrupted, and we can hardly find the peaks of the respiratory activity. The peaks are broader, and that results in a lack of precision of the HBR and RR. The breath-holding BCG contains only cardiac information. The periodicity is clearly noticeable, which was not obvious in the other activities. This analysis of the BCG signal's content motivates the use of an approach based on the characterization of the useful frames. This particular problem is complex and thus demanding when it comes to the choice of features with physical significance. In the next subsections, we will define the features and try to highlight the intuition behind each one of them. BCG Signal Feature Extraction A periodic signal can be represented as a sum of sine waves, and thus the Fourier transform of such a signal will be spiky. This statement motivated the idea of using the following two features: the Spectral Flatness Measure (SFM) and the Spectral Centroid (SC). Let x(n) be a BCG signal. The latter is decomposed into frames of short duration. These frames should be long enough to carry information about the activity, but not too long, to avoid an overlap of two or more different activities. In the frequency domain, the short-term Fourier transform is calculated and its amplitude is extracted.
It is denoted |X(m, k)|, where m is the frame index and k is the discrete frequency. Spectral Flatness Measure (SFM): The SFM, also known as Wiener entropy, is a signal processing measure used to describe the flatness of the spectrum of the signal [10,11]. The SFM is defined as the ratio of the geometric mean and the arithmetic mean of the Fourier transform amplitudes. When the spectrum is flat (white noise signal), the resulting measure is close to 1:

sfm(m) = (\prod_{k=1}^{N} |X(m,k)|)^{1/N} / ((1/N) \sum_{k=1}^{N} |X(m,k)|),   (1)

where k is the frequency bin index and N is the number of frequency bins. Spectral Centroid (SC): The SC indicates where the center of mass of the spectrum is located:

sc(m) = \sum_{k=1}^{N} f(k) |X(m,k)| / \sum_{k=1}^{N} |X(m,k)|,   (2)

where f(k) is the frequency in Hz of bin k. The vector [sfm(1), ..., sfm(L)]^T (L is the number of frames) is transformed into a time series (equivalent to a signal) by overlapping and adding the sfm value of each frame. Note that the latter is a constant vector, whose value is sfm and whose length is the frame size. The sfm signal and sc signal are then used for the purpose of classification. Activities Classification Respiratory activity classification has been performed using a K-nearest neighbors classifier. It is a non-parametric classification method which classifies a sample based on a plurality vote of its neighbors: the sample is assigned to the class most common among its K nearest neighbors (K is a positive integer) in terms of minimal distance. The algorithm adopted is Fine KNN, which is the finest variation of KNN, since it labels the new input with the label of its single nearest neighbor (K = 1). The evaluation of the algorithm as well as the classification results were obtained using k-fold cross-validation with k = 5. Classification Evaluation The classification performance is evaluated in terms of true positive rate and positive predictive value. True Positive Rate: The performance of our model is mainly measured using the confusion matrix [12]. Specifically, TPR measures the proportion of detected positives out of the actual positives; in other terms, TPR measures how sensitive the model is to the positive class:

TPR_i = true positives / (true positives + false negatives),   (3)

where i corresponds to the class (activity) of the subject (i = 1..7). In the terms of the confusion matrix presented in Fig. 4, where C is the number of classes and M_ij is the number of predictions of class i that actually belong to class j (it is usually measured by comparing the test results to the ground truth), this reads

TPR_i = M_ii / \sum_{j=1}^{C} M_ji.   (4)

Positive Predictive Value: The proportion of the predictions made that are actually true. PPV highlights mostly how refined our model is and how frequently false alerts occur:

PPV_i = true positives / (true positives + false positives).   (5)

In the terms of the confusion matrix presented in Fig. 5, this reads

PPV_i = M_ii / \sum_{j=1}^{C} M_ij.   (6)

Feature Illustration This section illustrates the feature analysis and interpretation by providing the means of the SFM and SC for each activity. The BCG signal we are working with is the same as displayed in Fig. 1. The mean values are given in Table 1. We note that the normal-breathing mean value of the SFM is the lowest, which confirms the periodicity hypothesis. The values of SFM and SC taken during the coughing portion (C2) as well as the movement portion (C6) are relatively high, which further confirms the non-periodicity in the corresponding portions.
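For reference, the two descriptors of Eqs. (1)-(2) can be computed with a few lines of NumPy/SciPy. The frame length, windowing and library choices below are our own assumptions, since the paper does not specify implementation details.

import numpy as np
from scipy.signal import stft

def sfm_sc_features(x, fs=50, frame_len=256):
    """Per-frame SFM and SC of a BCG signal x sampled at fs Hz."""
    f, _, X = stft(x, fs=fs, nperseg=frame_len)    # X: bins x frames
    mag = np.abs(X) + 1e-12                        # avoid log/division by zero
    # Eq. (1): geometric mean over arithmetic mean of the magnitude spectrum.
    sfm = np.exp(np.mean(np.log(mag), axis=0)) / np.mean(mag, axis=0)
    # Eq. (2): magnitude-weighted mean frequency (spectral center of mass).
    sc = (f[:, None] * mag).sum(axis=0) / mag.sum(axis=0)
    return sfm, sc

# The per-frame values can be expanded into frame-length constant segments
# (overlap-add) to obtain the sfm/sc time series described above, and then
# fed to a 1-nearest-neighbor classifier, e.g.
# sklearn.neighbors.KNeighborsClassifier(n_neighbors=1).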
For the post-cough breathing, we notice that, unlike the portion of normal breathing, the values of the descriptors are high and close to those recorded during the movement and coughing activities, which supports our choice to isolate these portions. The values of the descriptors closest to the ideal ones (those of normal breathing) are those recorded during breath holding; this is due to the fact that the breath-holding portions are periodic and carry only the heart rhythm information. Figure 3 shows the sfm signal of some activities. In the top left, the normal respiration phase is considered. The SFM values are low, as expected; this fact is due to the clear periodicity of that activity. The SFM of breath holding (bottom left plot) is quite low as well, also an expected result, since the cardiac information is present in the breath-holding activity. Both the cough and movement activities (right plots) show big fluctuations in their respective sfm signals; this is due to the absence of periodicity in these signals. Classifier Evaluation Using the sfm signal and sc signal, we obtained a classification rate of 94%. Figure 4 shows the TPR confusion matrix. We can observe that our model performs very well overall; the TPR values appear on the diagonal of the confusion matrix. Most of the errors are detected in class C4, where 31% of the latter (corresponding to the class breath holding) is predicted as normal respiration. This result is expected, since periodicity is present in the breath-holding portions due to the cardiac activity. We also observe a high confusion between the class cough (C2) and the class movement (C6). This is due to the fact that our predictors are mainly equipped to detect the existence of periodicity in the portions. Figure 5 shows the PPV confusion matrix. We can observe the confusion between classes C1 and C4 (7%) in terms of PPV; this corresponds to a high false-alert rate (the majority of false alerts in the class normal respiration are recorded as breath holding), which further confirms the similar-periodicity hypothesis mentioned in the previous paragraph. We can also pinpoint an alarming 13% with class C5, which is due to the weakness of the adopted features when it comes to differentiating between the highly non-periodic portions. Conclusion In this study we investigated respiratory activity classification based on the BCG signal. We used a time-series signal reconstructed from the spectral flatness measure (SFM) and spectral centroid (SC) of the raw data. We obtained a classification rate of 94%, which shows the effectiveness of the proposed method. The supervised classification process, however, is demanding when it comes to data and computation. Treating the feature extraction process by generating a time series is a novelty which motivates the use of more sophisticated deep-learning algorithms, such as Long Short-Term Memory (LSTM), one of the most widely used Recurrent Neural Network (RNN) architectures in time-series-related problems.
2020-06-25T05:04:14.335Z
2020-05-31T00:00:00.000
{ "year": 2020, "sha1": "cf98a6b1ce47e14410daa22a8e4097ddbbacfafc", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-51517-1_7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "cf98a6b1ce47e14410daa22a8e4097ddbbacfafc", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
81925814
pes2o/s2orc
v3-fos-license
Incidence of appendicitis and ovarian cyst among female patients presenting with acute abdomen at a tertiary care hospital Background: Physicians working in the casualty are often confronted with acute abdomen and are often troubled, as the diagnosis is not easy. This is due to the fact that the etiology of acute abdomen is always much diversified and the classical findings are masked, making the diagnosis difficult. The objective was to study the incidence of appendicitis and ovarian cyst among female patients presenting with acute abdomen. Methods: A hospital-based follow-up study was carried out among 64 cases presenting with acute abdomen to the casualty from January 2018 to August 2018. All necessary investigations were done to confirm the etiology of acute abdomen. The cases belonged to the surgery and gynecology departments, where they were operated on. The patients were followed from admission in the casualty to the final outcome. The data were analyzed using proportions. Results: During the study period a total of 15413 patients were admitted to the casualty; out of them, 64 (0.42%) presented with acute abdomen. Of these 64 cases, the majority, i.e. 35 (54.7%), were due to acute appendicitis; 13 cases were due to renal colic, and eight cases each were due to ectopic pregnancy and ovarian cyst. The most common age group affected was 21-30 years (62.5%), followed by the less-than-20-years age group (21.9%). 25% of the 64 cases had delayed wound healing, and no other complications were reported. No death was recorded. Conclusions: The authors achieved excellent results, as there were no major complications and no death was recorded. Thus, meticulous diagnosis and prompt treatment can save a patient's life, and at the same time the rate of complications can be reduced. INTRODUCTION Among all the causes with which patients present to the casualty, acute abdomen constitutes around 5-10% of all such cases, making it an important clinical emergency entity. 1 Physicians working in the casualty are often confronted with acute abdomen and are often troubled, as the diagnosis is not easy. This is due to the fact that the etiology of acute abdomen is always much diversified and the classical findings are masked, making the diagnosis difficult. The causes can range from mild or negligible to causes that can threaten the life of the patient. The etiology may lead to referral to any department of the hospital: to the obstetrics and gynecology department for those having conditions like ovarian cyst or ectopic pregnancy, or to the surgery department for those having acute appendicitis, etc. Despite thorough evaluation of patients with acute abdomen, around 25% of the cases remain with a non-specific diagnosis. 2 The classical symptoms of acute abdomen are more pronounced among young patients compared to elderly patients. Elderly patients, instead of acute pain, may present with pain of long duration. 3 Acute abdomen is commonly seen among casualty admissions. Therefore, casualty doctors must be aware of the most common causes of acute abdomen so that they are in a position to diagnose, treat and give the best relief to patients with acute abdomen who come to the casualty, thus increasing the possibility of early discharge of such patients. 4 Pain in the right iliac fossa is the most common presentation among females of the reproductive age group, which is generally considered to be between 15 and 45 years. Not all of them will have appendicitis.
Thus, careful evaluation can avoid unnecessary investigations as well as surgeries in such cases. In some cases, careful observation alone may be sufficient. 5 When patients come to the casualty with a complaint of acute abdomen and the physician finds that the patient does not have classical symptoms, the best strategy is to observe the patient and wait. But in some cases, when the patient presents with non-specific symptoms, this strategy may risk the patient's life. Such patients may develop hemorrhage, peritonitis or infertility. 6 As discussed above, it is quite difficult to reach a proper diagnosis based on clinical signs and symptoms. But nowadays, with advances in health care, the use of USG, CT scan etc. can help improve diagnosis. 7 Hence the present study was carried out to study the incidence of appendicitis and ovarian cyst among female patients presenting with acute abdomen. METHODS The present study was a hospital-based follow-up study, carried out at the Department of Surgery and the Department of Obstetrics and Gynecology, Government Medical College, Mahabubnagar, from January 2018 to August 2018. Consent was obtained from each patient and a patient relative. Sample size During the study period a total of 15413 casualty admissions took place. Out of them, 64 cases were of acute abdomen. All were studied, as they consented to be part of this study and were found eligible for the present study. Methodology From January 2018 to August 2018 a total of 15413 patients were admitted to the casualty. Out of these, 64 cases presented with acute abdomen. All patients were thoroughly evaluated. A detailed clinical history and complete physical examination were carried out. All findings, including name, age, sex etc., were recorded. Complete blood picture, complete urine examination and standing abdominal X-ray were carried out for all 64 patients. They were also subjected to ultrasonography and CT scan of the abdomen in doubtful cases. All patients underwent emergency surgery based on the etiology in the departments of surgery and gynecology. Postoperative complications were noted. Once the patient was completely alright, she was discharged. The data were analyzed using proportions. RESULTS All eight months recorded almost similar admission numbers in the casualty, ranging from a minimum of 11.2% in June 2018 to a maximum of 13.8% in July 2018. A total of 15413 admissions took place during the study period (Table 1). Of the 64 cases with acute abdomen, the majority, i.e. 35 (54.7%), were due to acute appendicitis. This was followed by renal colic due to calculi in 13 (20.3%) cases. There was a total of 16 gynecology cases, of which 8 were due to ectopic pregnancy and 8 were due to ovarian cyst (Table 3). Thus, acute abdomen was most common in the young age group, which constituted a total of 84.4% of the cases (Table 4). All cases were operated on in the surgery and gynecology departments. Only 16 (25%) had delayed wound healing, which resolved with prompt care. No patient had any major complication, and no death occurred, making the management of acute abdomen highly successful (Table 5). When etiology-wise complications were analyzed, it was found that the complication rate of delayed wound healing was highest in the ectopic pregnancy cases. It was nil in the cases of ovarian cyst as well as in the cases of renal calculi, while it was 25% in the cases of acute appendicitis (Table 6).
DISCUSSION The incidence of acute abdomen was very low (0.42%). This low incidence may be due to the fact that only female patients were included in the present study. Around 84.4% of the cases were younger than 30 years in the present study. Chanana L et al, in their study, also found that 55.6% of the cases were younger and belonged to the age group of 15-40 years. 8 The authors also found that males were more affected than females; the present study, however, was done exclusively among females. In the authors' study, the majority had a history of acute abdominal pain. More than half had sudden-onset, dull abdominal pain. The lower abdomen was the pain site in 45.8% of the cases. The authors found that the incidence of acute appendicitis was 10.6%, while we found it to be high at 54.7%. The authors reported a death rate of 2.3%, while it was nil in the present study. 8 The authors concluded that multiple diagnoses should be considered while dealing with acute abdomen cases. 8 Abbas SH et al studied 286 cases who were admitted to the emergency department. 9 They applied multivariate logistic regression analysis and found that certain factors like the presence of guarding, vomiting, increased heart rate and increased white blood cell count were predictors of the underlying morbidity. The authors concluded that in the absence of these signs, the patient may not require any intervention, either surgical or medical. 9 Morino M et al, similar to the present study, studied 508 female patients. 10 They evaluated the need for laparoscopy compared to careful observation with intervention only in certain indicated cases. They formed two similar groups: one group was operated on, while the other group was carefully observed. In the observation group, only 39.2% required surgery, as the signs and symptoms warranted it. The incidence of acute appendicitis was 30.1%, which is lower than what we found. Recurrence of abdominal pain was significantly more frequent in the observation group than in the laparoscopy group. But the authors concluded that observation is as good as surgical intervention. 10 Staniland JR et al, in their study of 600 patients with acute abdomen, found that 66% had classical features of acute abdomen, and the remainder were not found to have classical features. 11 Thus, the authors concluded that clinical diagnosis may not be effective in 30% of the cases. 11 Gajjar R et al found that 52% were young patients, which is similar to the findings of the present study. 12 63% were males. Sudden onset of the pain was seen in 64% of the cases. Generalized pain was present in 40% of the patients. Radiating pain was absent in 80% of the cases. Nausea was present in 56% of the cases. Vomiting was present in 42% of the cases. Urinary symptoms were seen in 18% of the cases. The gynecological cases comprised only 3%, while in the present study this rate was 25%. The authors found that ureteric colic was most common, while we found that acute appendicitis was the most common cause, in 54.7% of the cases. 12 Rama Rao P et al studied only female patients, which is similar to the present study, where we also studied only female patients. 13 They concluded that acute appendicitis was the most common etiological diagnosis, which is similar to the findings of the present study.
The authors also found that the most common age group affected was 21-30 years, which is again similar to the findings of the present study, where we also found that the most common age group affected was 21-30 years. 13 Caterino S et al carried out a retrospective study among 450 cases with acute abdomen. 14 They tried to identify the most common causes of acute abdomen and found that acute appendicitis was the most common (16.4%) etiological diagnosis, which is similar to the findings of the present study, where we also found that acute appendicitis was the most common (54.7%) etiological diagnosis; our rate, however, was almost three times theirs. The authors reported a death rate of 4.2%, whereas in our study no deaths occurred. 14 CONCLUSION Acute appendicitis was the most common cause of acute abdomen in the present study. The younger age group was most commonly affected. Some 25% of the cases were attributed to gynecological etiologies like ectopic pregnancy and ovarian cyst. Thus, among females, the younger age group and acute appendicitis form the most common profile of acute abdomen, and this study throws a considerable guiding light for new casualty doctors.
2019-03-18T14:04:41.012Z
2018-11-26T00:00:00.000
{ "year": 2018, "sha1": "b68d934d605a9420d0719cd25ae6082b45f68857", "oa_license": null, "oa_url": "https://www.msjonline.org/index.php/ijrms/article/download/5545/4332", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "11948192d243d6aaaccda2a83ebfe6c03af5a045", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
139464567
pes2o/s2orc
v3-fos-license
Polypropylene Crystallisation in the Presence of Inorganic Additives The physical modification of polypropylene (PP) fibres with inorganic additives ensures more intense anchoring of the PP fibres in constructional composites, which leads to a great improvement in the function of PP fibres in relation to the transmission and absorption of deformation energy during the formation and loading of composites. This work focuses on the preparation of PP fibres modified with untreated and treated CaCO3 and SiO2 for constructional composites. It investigated the effect of the inorganic additives on the thermal, thermo-mechanical and sorption properties of these fibres. The melting and crystallisation temperatures as well as the melting and crystallisation enthalpies of the PP and modified PP fibres depend on the additives and on the conditions of preparation of the PP fibres applied. A lower amount of inorganic additives improved, while a higher amount of inorganic additives worsened, the dimensional stability of the PP fibres observed. The addition of inorganic fillers increased the water vapour sorption of the modified PP fibres in comparison with the pure PP fibre. Introduction Unmodified short and long polypropylene (PP) fibres are standard materials used in constructional composites. The PP fibres used as reinforcement in construction composites reduce crack propagation, increase flexural and bending strength, and improve impact resistance. Their advantage is very good chemical resistance and low sensitivity to moisture [1]. On the other hand, their non-polar, hydrophobic, physically and chemically inactive polyolefinic character does not allow the creation of chemical or physical intermolecular bonds between the concrete matrix and the PP fibres, and thus results in a low affinity of PP fibre to the cement matrix. The effects of PP fibres of various geometries on the compressive and bending strength of reinforced cement mortars were compared in [2]. PP fibres of various geometries do not affect the compressive strength, but a significant increase in the bending strength was observed for mortars reinforced with fibrillated fibres, a lower increase for fibres with a star cross-section, and a very small increase in bending strength for mortars reinforced with fibres of round shape. One of the disadvantages is mainly the decrease in the fibre's ability to absorb deformation energy during the loading of the concrete composite in flexure. The result is the increased release of fibres by detachment from the concrete composite, instead of their deformation along with the composite matrix [3,4]. Many methods of surface treatment of polymer fibres exist at present which improve the bond strength with the concrete matrix [5-9]. It has been demonstrated in the literature that nanoparticles added to concretes as active mineral additives affect the nucleation centres in the formation of hydration products, which helps to increase the strength and durability of construction materials [9-11]. An increase in the effectiveness of a dispersed PP fibre reinforcement in a concrete composite, in terms of mechanical parameters, in the presence of nano-SiO2 was also presented in the literature [11]. One of the possible modifications of PP fibres for the improvement of the chemical and physical interaction between the fibres and the concrete matrix is modification with nanoparticles of inorganic fillers [12].
If the particles are incorporated in the fibre surface, then the interactions between the inorganic particles in the fibre surface and the inorganic concrete matrix would lead to significantly stronger bonds. Some knowledge from the application of PP fibres in concrete composites with nano-SiO2 particles is interesting. It was shown that the physical and chemical effects of the nanoparticles help to reduce the amount of water film around the PP fibres, and that nanofillers decrease the porosity at the fibre/matrix interface. Results also showed that nano-SiO2 improves the mechanical parameters of the fibre-concrete composite [11]. The other disadvantage of PP fibres is their low elastic modulus as well as their significantly lower density in comparison with fresh concretes, which causes the imperfect dispersion of fibres in the composite volume. Special hydrophilic treatments of PP fibres can decrease these undesirable effects and provide more complete dispersion in a water environment. The modification of PP fibre surfaces leads to a very significant increase in their water absorbency [12]. PP is relatively brittle, and the addition of inorganic nanoadditives (CaCO3, SiO2 and TiO2) also mitigates this property. The high agglomeration ability and poor dispersion of nanoparticles within the PP matrix are caused by the high surface energy of the nanoparticles, and the improvement of the mechanical properties is therefore very difficult. Consequently, specific methods for the preparation of nanoparticle/polymer composites are used [13]. It has been ascertained that even a small number of nanoparticles results in an effective increase in the flexural and bending strength, elastic modulus and rigidity [14,15]. Inorganic nanoadditives, as one of many additives used for PP (e.g. pigments [16], carbon nanotubes [17], montmorillonite [18,19], wood or lignin fibres [20-23]), commonly referred to as nucleating agents, significantly affect the thermal and mechanical properties of filled PP [24-29]. Nanoparticles acting as nucleating agents induce a change in the crystallisation behaviour of semicrystalline PP and increase the crystallisation rate and the content of the crystalline phase. Changes in the crystallisation behaviour of PP with inorganic nanoadditives determine the final properties of the PP fibres, e.g. the mechanical and thermo-mechanical properties, sorption etc. [30]. Consequently, the study of the crystallisation behaviour of PP modified with nanoadditives is very important from the point of view of their preparation, required properties and further application. The crystallisation process of PP can be studied under isothermal or non-isothermal conditions [25, 28, 31-37]. In this work, the effect of two types of inorganic additives on the thermal properties of various masterbatches, as well as on the thermal, thermo-mechanical and mechanical properties and water sorption of modified PP fibres with and without stabilisation, was studied. Materials used Isotactic polypropylene TATREN HT 1810 (PP) (Slovnaft Corporation, Slovakia), with a melt flow index MFI = 20.6 g/10 min, supplied by the Slovnaft Corporation, was used in the preparation of the unmodified PP fibres and the PP fibres modified with inorganic additives. The polypropylene TATREN HT 1810 was used for the preparation of short fibres for reinforced cement constructional composites.
Two types of inorganic additives were used in the modified PP fibres: CaCO3 (Ca) additive particles with the commercial name SOCAL U3 (Solvay Corporation, USA), with a diameter of 20 nm, a free-flowing density of 170 g·l⁻¹ and a specific surface of 70 m²·g⁻¹; and SiO2 (Si) additive particles with the commercial name SIPERNAT 22S (Evonik Industries AG, Germany), with a diameter of 13.5 nm, a free-flowing density of 90 g·l⁻¹ and a specific surface of 190 m²·g⁻¹. Acetone (AC) (Lachema, a.s., Czech Republic) with a density of 790 kg·m⁻³ and pimelic acid (PA) with a melting temperature of 376.1-378.1 K (Merck, Germany) were used for the treatment of the inorganic additives.

Preparation of PP fibres modified with CaCO3 or SiO2
The CaCO3, SiO2 and pimelic acid were dried in a vacuum at room temperature. Blends of CaCO3 or SiO2 with PA and AC were prepared at concentration ratios of CaCO3 : PA : AC = 5 : 1 : 20 (CaP) and SiO2 : PA : AC = 5 : 1 : 50 (SiP). These blends were mixed for 1 h with a mechanical stirrer in the water bath of an ultrasonicator. After the treatment of the inorganic additives, the acetone was evaporated from the prepared blends at room temperature for 6 h. The modified PP/Ca and PP/Si fibres were prepared in two steps: preparation of PP/Ca, PP/CaP, PP/Si and PP/SiP masterbatches and then of the modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres. PP masterbatches were prepared with 5 and 10 wt% of inorganic additives. Isotactic PP was compounded with various amounts of untreated (CaCO3 - Ca, or SiO2 - Si) and treated (CaP or SiP) inorganic additives in a twin-screw extruder from LabTech Engineering Company Ltd., with a diameter of φ = 16 mm, at an extrusion temperature of 513 K. The extrudate was then cooled and pelletised. The pellets of the 10 wt% masterbatches were mechanically mixed with PP so that mixtures with the required concentrations of additives in the modified PP/Ca, PP/CaP, PP/Si or PP/SiP fibres (1, 3 and 5 wt% additive in the fibres) were obtained. Undrawn modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres were prepared from the individual mixtures by classical spinning using a laboratory pilot line with a single-screw extruder of φ = 16 mm, equipped with a nozzle containing 13 holes, at a spinning temperature of 523 K and a fibre take-up speed of 150 m·min⁻¹. The undrawn fibres were drawn using laboratory equipment at 395 K with the maximum drawing ratio. Characteristics of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres prepared are summarised in Table 1.

Methods used for characterisation
Differential scanning calorimetry. For the study of isothermal crystallisation, the samples were heated to 493.1 K, isothermally held for 5 minutes and afterwards cooled down to the crystallisation temperature of 403.1 K at a rate of 200 K·min⁻¹. The dependencies of heat flow on time obtained were used for the determination of the crystallisation enthalpy, and the kinetics of isothermal crystallisation were evaluated using the Avrami equation. Thermal analysis of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP masterbatches and fibres was also performed. A sample of the original fibre was heated to 493.1 K at a rate of 10 K·min⁻¹. At this temperature (493.1 K) the sample was isothermally held for 5 minutes to remove the thermal history of the fibre preparation. Then the sample was cooled from 493.1 K to 323.1 K at a rate of 10 K·min⁻¹ (plus 5, 20, 30 and 50 K·min⁻¹ for the masterbatches). Subsequently the sample was exposed to a second heating to 473.1 K at a rate of 10 K·min⁻¹. All measurements were carried out in an inert atmosphere.
From the melting endotherms of the 1st and 2nd heating, the melting temperatures (T_m) and melting enthalpies (ΔH_m), as well as the crystallisation temperatures (T_c) and crystallisation enthalpies (ΔH_c) from the crystallisation exotherms, were determined. The actual melting enthalpy of the PP component, ΔH_PP, is related to the measured melting enthalpy through the weight fraction w of PP, Equation (1):

ΔH_PP = ΔH_m / w    (1)

where w is the weight fraction of PP in the modified PP samples. The crystallinity (β) of PP in the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP masterbatches and fibres was calculated from Equation (2):

β = (ΔH_PP / ΔH⁰_PP) × 100 [%]    (2)

where ΔH_PP is the experimental melting enthalpy of PP in the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP masterbatches and fibres, and ΔH⁰_PP is the melting enthalpy of completely crystalline PP, a theoretical value obtained from the literature (209 J·g⁻¹).

Mechanical properties
The tenacity (σ), elongation at break (ε) and Young's modulus (E), representing the mechanical properties of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres, were evaluated with Instron 3343 equipment (USA). The measuring conditions were as follows: clamping length of the fibre 125 mm, rate of the clamp 500 mm·min⁻¹. The mechanical characteristics observed were determined in accordance with Standard ISO 2062:1993, with an error of 1.5%.

Thermo-mechanical properties
The thermo-mechanical characteristics of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres were measured with Shimadzu TMA-50 (Japan) equipment. The deformation (extension or shrinkage, Δl, %) and the temperature of the first distortion of the fibres (T_D) were measured. The fibre (length 9.8 mm) was heated from room temperature to 383 K at a heating rate of 5 K·min⁻¹ under a constant load; the dependence of the dimensional stability of the fibre on temperature was obtained, from which the thermo-mechanical characteristics were determined. The deformation (shrinkage, Δl, %) and the temperature of the first distortion of the fibres (T_D) were determined with errors of 0.5 and 1.8%, respectively.

Water vapour sorption
Gravimetric analysis was used for the evaluation of the water vapour sorption (WVS) of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres. The samples were dried at 353.1 K for 1 h in a drying chamber. Next, the samples were put into glass vessels with a saturated solution of NH4NO3 (the relative humidity above this solution was 65% at 293.1 K) for 96 h. After this period, the samples were weighed. Afterwards they were dried in a drying chamber at 378.1 K for 3 h and weighed again. The content of sorbed water vapour (C_WV) was calculated using Equation (3):

C_WV = ((m − m₀) / m₀) × 100 [%]    (3)

where m is the weight of the fibre with sorbed water vapour in the equilibrium state (after 96 h) and m₀ that of the fibre after drying. The water vapour sorption was determined with an error of 1.2%.
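To make Equations (1)-(3) concrete, the short sketch below evaluates them for illustrative, made-up input values; the enthalpy and mass figures are placeholders, not data from this study, and only ΔH⁰_PP = 209 J·g⁻¹ is taken from the text.

```python
# Sketch of Equations (1)-(3); all inputs except DH0_PP are hypothetical.
DH0_PP = 209.0  # J/g, melting enthalpy of completely crystalline PP (from the text)

def melting_enthalpy_pp(dh_measured, w):
    """Eq. (1): measured melting enthalpy normalised to the PP weight fraction w."""
    return dh_measured / w

def crystallinity(dh_pp):
    """Eq. (2): crystallinity beta in %."""
    return 100.0 * dh_pp / DH0_PP

def water_vapour_sorption(m_wet, m_dry):
    """Eq. (3): sorbed water vapour content C_WV in %."""
    return 100.0 * (m_wet - m_dry) / m_dry

# Example: a fibre with 5 wt% additive (w = 0.95) and a measured
# melting enthalpy of 92 J/g (hypothetical value).
dh_pp = melting_enthalpy_pp(92.0, 0.95)
print(f"beta = {crystallinity(dh_pp):.1f} %")                    # ~46.3 %
print(f"C_WV = {water_vapour_sorption(0.5125, 0.5000):.2f} %")   # 2.50 %
```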
Results and discussion
Isothermal crystallisation kinetics
A significant amount of heat is released in the crystallisation of PP from the melt. The heat released during the crystallisation allows for the determination of the relative degree of crystallinity (X(t)) in a given period of time. The isothermal crystallisation of PP and of the PP masterbatches modified with CaCO3, untreated SiO2 or SiO2 treated with pimelic acid was performed at 403.1 K for the evaluation of the crystallisation kinetics using the Avrami method [25], the most commonly used analysis of the isothermal crystallisation of PP.
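Before turning to the Avrami analysis itself, note that the paper does not show how X(t) is obtained from the released heat. A standard approach (assumed here) is to integrate the baseline-corrected exothermic heat flow partially and normalise by the total heat of crystallisation; the sketch below uses a synthetic exotherm, not real data:

```python
import numpy as np

def relative_crystallinity(t, heat_flow):
    """Relative degree of crystallinity X(t) from an isothermal DSC exotherm.

    t         -- time array (s)
    heat_flow -- baseline-corrected exothermic heat flow (W/g), >= 0
    X(t) is the running trapezoidal integral divided by the total area.
    """
    slices = np.diff(t) * 0.5 * (heat_flow[1:] + heat_flow[:-1])
    partial = np.concatenate(([0.0], np.cumsum(slices)))
    return partial / partial[-1]

# Hypothetical smooth exotherm for demonstration only.
t = np.linspace(0, 600, 601)              # 10 minutes of crystallisation
hf = np.exp(-((t - 200) / 80.0) ** 2)     # synthetic peak, not measured data
X = relative_crystallinity(t, hf)
t_half = t[np.searchsorted(X, 0.5)]       # crystallisation half-time t_1/2
print(f"t_1/2 = {t_half:.0f} s")          # ~200 s for this symmetric peak
```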
This method describes the dependence of the relative degree of crystallinity (X(t)) on the crystallisation time (t), Equation (4):

X(t) = 1 − exp(−k·tⁿ)    (4)
where n is the Avrami exponent, which is a function of the nucleation process, and k is the growth function, which depends on the nucleation and the crystal growth. If the Avrami index n is about 3, then heterogeneous nucleation with three-dimensional spherulitic growth in a spherical form is characteristic of the polymer crystallisation [38]. Primary crystallisation is characterised by the prevalence of nuclei and relatively faster growth of lamellar crystals, while a secondary crystallisation process is the result of the crystallisation of a component with a different modification and/or of an increment in the perfection of existing crystallites [38,39]. If the value of n is about 2, then it is possible to indicate the growth of bidimensional crystallites; polymer crystallisation with bidimensional crystal growth in the form of a disk is predicted.

On the basis of plots of log[−ln(1−X(t))] vs. log t (Equation (4)) for the isothermal crystallisation at 403.1 K of PP and the modified PP/Ca, PP/CaP, PP/Si and PP/SiP masterbatches, the Avrami values n, k and K were calculated (Figure 1 and Table 2). The Avrami index n of PP masterbatches with untreated SiO2 is lower, mainly at a lower SiO2 concentration in the masterbatches, while the Avrami index of PP masterbatches with treated SiO2 is higher than that of pure PP. The Avrami values (n, k, K and t_1/2) confirm the increased nucleation effectiveness of treated CaCO3 and untreated SiO2 for PP crystallisation in the form of bidimensional crystals in comparison with the untreated CaCO3 and treated SiO2.
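A minimal sketch of the Avrami analysis just described, fitting n and k from the linearised form log[−ln(1−X)] = log k + n·log t; the X(t) data here are generated from known parameters as a self-test, not taken from Table 2:

```python
import numpy as np

def avrami_fit(t, X):
    """Fit Avrami parameters via log10[-ln(1-X)] = log10(k) + n*log10(t).

    Only the central part of the transformation (0.05 < X < 0.95) is used,
    where the linearisation is reliable.
    """
    mask = (X > 0.05) & (X < 0.95)
    y = np.log10(-np.log(1.0 - X[mask]))
    x = np.log10(t[mask])
    n, logk = np.polyfit(x, y, 1)             # slope = n, intercept = log10(k)
    k = 10.0 ** logk
    t_half = (np.log(2.0) / k) ** (1.0 / n)   # crystallisation half-time
    return n, k, t_half

t = np.linspace(1, 600, 600)
X = 1.0 - np.exp(-1e-5 * t ** 2.1)   # hypothetical n = 2.1 (bidimensional-like)
n, k, t_half = avrami_fit(t, X)
print(f"n = {n:.2f}, k = {k:.2e}, t_1/2 = {t_half:.0f} s")   # recovers n ~ 2.1
```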
The secondary nucleation Lauritzen-Hoffman theory [25,36,37] was used for the calculation of σ and σ_e, the side surface (lateral) and fold surface (end) free energies, which are a measure of the work required to create a new surface, as well as for the calculation of the nucleation activity (ϕ) (Figures 2 and 3 and Table 2). The addition of inorganic additives to PP causes a decrease in the free energy of crystal creation as opposed to that in unmodified PP. In PP crystallisation in the presence of inorganic additives, the creation of crystallites is faster and simpler in comparison with pure PP. The nucleation activity (ϕ) can take values between 0 and 1. If the value of the nucleation activity is equal to 1, then the inorganic additive is inactive in the crystallisation of PP; conversely, if the value is equal to 0, then the inorganic additive is extremely active. The treatment of the inorganic additives (CaCO3, SiO2) with pimelic acid increases the nucleation activity of these additives for PP crystallisation (Table 2). Higher nucleation activity is observed at their higher content in the PP masterbatches.
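The paper reports ϕ values but not the computation itself. One widely used route (assumed here; the article may have used a different variant) is the Dobreva-Gutzow method, in which ϕ = B*/B is the ratio of the slopes of ln q vs. 1/ΔT_p² lines for the filled and the pure polymer, obtained from non-isothermal runs at cooling rates such as the q = 5-50 K·min⁻¹ mentioned above. All peak supercoolings below are illustrative placeholders:

```python
import numpy as np

def dobreva_slope(q, dT_p):
    """Slope magnitude B from ln q = const - B / dT_p**2 (Dobreva-Gutzow).

    q    -- cooling rates (K/min)
    dT_p -- supercooling T_m0 - T_p at the crystallisation peak for each rate
    """
    x = 1.0 / np.asarray(dT_p, dtype=float) ** 2
    y = np.log(np.asarray(q, dtype=float))
    slope, _ = np.polyfit(x, y, 1)
    return -slope  # the fitted slope is negative; B is its magnitude

# Hypothetical peak supercoolings for pure and filled PP (illustrative only).
q = [5, 10, 20, 30, 50]
B_pure   = dobreva_slope(q, [45.0, 48.5, 52.5, 55.0, 58.5])
B_filled = dobreva_slope(q, [40.0, 43.5, 47.5, 50.0, 53.5])
phi = B_filled / B_pure   # phi < 1 means the filler is nucleation-active
print(f"phi = {phi:.2f}")
```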
Thermal properties
The results of the thermal properties of anisotropic pure PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres obtained in the first heating (Tables 3, 4) show that the creation of the supermolecular structure of these fibres (expressed as crystallinity β) is simultaneously affected by the preparation conditions (spinning and drawing) as well as by the presence of the inorganic additives. Several peaks corresponding to the melting temperatures of the various crystal modifications created in the preparation of these fibres were obtained: modifications with melting temperatures corresponding to a blend of α- and β-modifications (T_m1, T_m2 = 429.1-436.1 K) as well as to the α-modification (T_m3 = 438.1-446.1 K). The higher melting temperature of the α-modification created in the modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres points towards the formation of more perfect crystals in the presence of the untreated and treated inorganic additives. These additives increase the crystallisation ability of PP in the modified fibres in comparison with pure PP. The results for pure PP and the modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres obtained in the second heating describe the supermolecular structure created without the stresses of the spinning and drawing processes. The structure created in the cooling of the isotropic pure PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP systems after the first heating and isothermal holding for 5 minutes to remove the thermal history of the fibre preparation reflects only the effect of the inorganic additives on PP crystallisation (Tables 5, 6). The addition of inorganic additives encourages the creation of crystal modifications with various melting temperatures; however, the crystal modifications with higher melting temperatures did not create multi-peaks. Changes in the creation of the supermolecular structure and the thermal behaviour of PP in the PP/Ca, PP/CaP, PP/Si and PP/SiP systems are affected by the presence of the inorganic additives, but mainly by the spinning and drawing conditions.
The supermolecular structure of a fibre created in the spinning and drawing processes is the basis for its thermo-mechanical, mechanical and other useful properties. The mechanical properties of pure PP and the modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres are shown in Table 7 and Figure 4. The tenacity at break (σ), Young's modulus (E) and elongation at break (ε) are affected by all additives, both untreated and treated CaCO3 and SiO2. The tenacity at break (σ) and Young's modulus (E) decrease with an increase in the additive content in the fibres; a significant decrease is observed mainly at a content from 3 wt% of additives in the modified PP fibres (Figure 4). The addition of 1 wt% untreated SiO2 causes an insignificant growth of Young's modulus (E). The higher Young's modulus (E) of the modified PP/Si fibres was caused by the inorganic filler, which acts as a reinforcement additive. On the contrary, the inorganic filler induced a decrease in the elongation of the modified PP fibre because the particles of the filler decrease the drawing ability of the polymer matrix (Table 7). This confirms the theoretical knowledge that the addition of micronised filler to an oriented anisotropic polymer matrix decreases its mechanical properties.
The size stability of fibres is a very important parameter from the viewpoint of their behaviour under load, for individual fibres as well as for those in the concrete. Therefore the thermo-mechanical characteristics (the extension resp. shrinkage of the fibre, Δl, %, and the heat distortion temperature of the fibres, T_D, K) were evaluated in the range from room temperature to 363 K; the results are presented in Table 7 and in Figure 5a. From the results obtained, the differences in the deformation of PP and the modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres were evident. The shrinkage of the PP fibres is reduced by 2.54%. The addition of untreated as well as treated CaCO3, or of untreated SiO2, improves the size stability of the fibres prepared. Treated SiO2 has either a null or a negative effect on the size stability of the fibres prepared (Table 7). On the other hand, the addition of untreated and treated SiO2 to PP fibres increases their heat distortion temperature (T_D) in comparison with pure PP fibres and PP fibres modified with untreated and treated CaCO3 (Figure 5a).

Figure 5. Dependencies of the heat distortion temperature of the fibres (T_D) (a) and the water vapour sorption (WVS) (b) on the concentration (C_A) of CaCO3 and SiO2 in unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres. C_A is the concentration of additives; X is CaCO3 or SiO2, and XP is CaCO3 or SiO2 treated with pimelic acid.

For the improvement of PP fibre adhesion to the concrete matrix, the water vapour sorption (WVS, %) of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres was evaluated as a function of the CaCO3 and SiO2 content (Figure 5b). The water vapour sorption of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres increases with an increase in the filler concentration. The fibres prepared with inorganic additives treated with pimelic acid, mainly with treated CaCO3, showed a more significant increase in water vapour sorption.

Conclusions
In this work, the crystallisation kinetics, evaluated on the basis of the Avrami and Lauritzen-Hoffman methods, of isotropic PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP systems were investigated. The thermal, thermo-mechanical and mechanical properties and the water vapour sorption of anisotropic PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres were studied as well.
The Avrami values (n, k, K and t_1/2) obtained for the PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP masterbatches confirm the increased nucleation effectiveness of treated CaCO3 and untreated SiO2 for PP crystallisation in the form of bidimensional crystals in comparison with untreated CaCO3 and treated SiO2.
The treatment of the inorganic additives (CaCO3, SiO2) with pimelic acid increases their nucleation activity for PP crystallisation (from ϕ = 0.9-0.7 to ϕ = 0.7-0.4). A higher nucleation activity is observed at their higher content in the PP masterbatches. The results of the thermal properties of anisotropic pure PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres obtained in the first heating show that the creation of the supermolecular structure of these fibres (expressed as crystallinity β) is simultaneously affected by the presence of the inorganic additives as well as by the preparation conditions (spinning and drawing). In the spinning and drawing of the fibres observed, a single crystal modification of PP is mainly formed. The second heating of pure PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres characterises the supermolecular structure created without the stresses of the spinning and drawing processes, after removing the thermal history of the fibre preparation. The addition of inorganic additives encourages the creation of crystal modifications with various melting temperatures, but the crystal modifications with higher melting temperatures did not create multi-peaks. The tenacity at break (σ) and Young's modulus (E) decrease with an increase in the additive content in the fibres, mainly at a content from 3 wt% of additives in the modified PP fibres. The inorganic fillers induced a decrease in the elongation of the modified PP fibres because the particles of the filler decrease the drawing ability of the polymer matrix. The results of the thermo-mechanical properties obtained show that the differences in the deformation of PP and the modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres were evident. The addition of untreated as well as treated CaCO3, or of untreated SiO2, improves the size stability of the fibres prepared, while treated SiO2 has either a null or a negative effect on their size stability. On the other hand, the addition of untreated and treated SiO2 to PP fibres increases their heat distortion temperature in comparison with pure PP fibres and PP fibres modified with untreated and treated CaCO3.
The water vapour sorption of the unmodified PP and modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres increases with an increase in the filler concentration. The results of this work show that untreated and treated inorganic additives change the supermolecular structure, the thermo-mechanical and mechanical properties, and the water vapour sorption of the modified PP/Ca, PP/CaP, PP/Si and PP/SiP fibres. Untreated CaCO3 and SiO2 improve the size stability and increase the crystallinity, but are less effective in improving the water vapour sorption properties; treated CaCO3 or SiO2 significantly improve the water vapour sorption properties in particular. A better adhesion to the concrete matrix can therefore be predicted for PP fibres modified with CaCO3 or SiO2 treated with pimelic acid.
Winding vector: how to annihilate two Dirac points with the same charge

The merging or emergence of a pair of Dirac points may be classified according to whether the winding numbers which characterize them are opposite ($+-$ scenario) or identical ($++$ scenario). From the touching point between two parabolic bands (one of them can be flat), two Dirac points with the {\it same} winding number emerge under appropriate distortion (interaction, etc), following the $++$ scenario. Under further distortion, these Dirac points merge following the $+-$ scenario, that is, corresponding to {\it opposite} winding numbers. This apparent contradiction is solved by the fact that the winding number is actually defined around a unit vector on the Bloch sphere and that this vector rotates during the motion of the Dirac points. This is shown here within the simplest two-band lattice model (Mielke) exhibiting a flat band. We argue on several examples that the evolution between the two scenarios is general.

Introduction - There has been a recent growing interest in various physical systems exhibiting a multiband excitation spectrum with crossing points between the bands. This interest was boosted by the discovery of graphene, where the low-energy spectrum is described by a 2D Dirac equation for massless fermions, giving the name "Dirac point" to such a linear crossing point [1]. In two dimensions, a band touching is a topological defect protected by time-reversal and inversion symmetries. Such a contact point is characterized by a winding number w (sometimes confused with a Berry phase [2]) which describes the winding of the phase of the wave function when moving around this point in reciprocal space. Such singularities may emerge or disappear under variation of external parameters under the constraint that the sum of their winding numbers is conserved [3,4]. It has been shown that the merging (or emergence) of two Dirac points in 2D crystals is described by two "universal Hamiltonians" depending on the topological properties of the Dirac points that merge [3,4]. They correspond to the two scenarios for winding numbers (+1, -1) -> 0 and (+1, +1) -> +2. These two Hamiltonians can be written with the help of two Pauli matrices σ_a, σ_b (a, b ∈ {x, y, z}) and three parameters Δ, m, c or Δ, m_a, m_b:

H_{+-}(q) = (Δ + q_x²/(2m)) σ_a + c q_y σ_b    (1)

H_{++}(q) = ((q_y² − q_x²)/(2m_a)) σ_a + (Δ − q_x q_y/m_b) σ_b    (2)

For the first Hamiltonian, the gapless phase with Dirac points corresponds to Δ < 0. When Δ ≥ 0, the Dirac points of opposite signs (w = ±1) have merged into a semi-Dirac spectrum (Δ = 0), linear in one direction and quadratic in the other, and then a gap 2Δ > 0 opens [5,6]. The second Hamiltonian describes the nematic distortion of a quadratic band touching [3,7-11]. A finite value of the parameter Δ splits this quadratic point into a pair of Dirac points of the same charge along a direction which depends on sgn(Δ). The total charge w = +2 being conserved, the contact is topologically stable and no gap opens. These Hamiltonians are "universal" in the sense that they provide a unique description of the merging of Dirac points, independent of its microscopic realizations [12-15]. An additional term proportional to the identity σ_0 may change the spectrum dramatically but does not change the geometric properties of the wave functions.

Figure 1. Top (a-d): The two Dirac points emerging at the M point eventually merge at the X point into an asymmetric semi-Dirac spectrum. Bottom (e-h): Emergence and motion of a pair of Dirac points from a symmetric quadratic spectrum; the two Dirac points eventually merge into a semi-Dirac spectrum.
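As a quick numerical illustration of the universal Hamiltonians (1) and (2), the sketch below evaluates the two band energies ±|h(q)| and locates the Dirac points where the gap closes; all masses and the velocity c are set to 1 purely for illustration:

```python
import numpy as np

# Universal merging Hamiltonians, written as h(q) . (sigma_a, sigma_b).
def h_pm(qx, qy, delta, m=1.0, c=1.0):
    """H+- : (+1,-1) scenario; gapless (two Dirac points) for delta < 0."""
    return np.array([delta + qx**2 / (2*m), c*qy])

def h_pp(qx, qy, delta, ma=1.0, mb=1.0):
    """H++ : (+1,+1) scenario; a quadratic touching splits for delta != 0."""
    return np.array([(qy**2 - qx**2) / (2*ma), delta - qx*qy/mb])

def gap(h):
    return 2.0 * np.linalg.norm(h)

# For delta = -0.5, H+- has Dirac points at qx = +/- 1, qy = 0:
for qx in (-1.0, 1.0):
    print(f"H+- gap at ({qx:+.1f}, 0):", gap(h_pm(qx, 0.0, -0.5)))
# For delta = 0.5, H++ has Dirac points on the diagonal qx = qy = +/- sqrt(delta):
q0 = np.sqrt(0.5)
for s in (-1.0, 1.0):
    print("H++ gap on the diagonal:", gap(h_pp(s*q0, s*q0, 0.5)))
```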
A quadratic touching point with a flat band enters the second category with an appropriate σ_0 term. The question that we pose in this letter is the fate of a pair of Dirac points emerging from a quadratic band crossing when further distortion is applied. We find a surprising situation where a unique pair of Dirac points emerges from a quadratic touching point and disappears as a semi-Dirac point with gap opening, therefore following the two different merging scenarios in the same physical system. Considering that this pair is unique, its emergence or merging necessarily occurs at a time-reversal invariant momentum (TRIM) G/2, where G is a reciprocal lattice vector [16]. In the vicinity of these points, the Bloch Hamiltonian takes either the form H_++ with the Pauli matrices (σ_x, σ_z), or the form H_+- with the matrices (σ_x, σ_y) or (σ_z, σ_y). This evolution poses a central question: how may a pair of Dirac points emerge or disappear following both scenarios (++) and (+-) while conserving the winding number in the first Brillouin zone? We show that this is possible by defining a winding number around a unit vector which rotates in pseudo-spin space. The resulting winding vectors (see later) of the two Dirac points are parallel at their emergence at the quadratic point, and antiparallel at their merging at the semi-Dirac point. Here, this evolution is described in the framework of a simple lattice model exhibiting a contact between a flat band and a quadratic band. This is a generic model for systems with more energy bands, like a deformed Kagome lattice [17] or a honeycomb lattice with p_x-p_y orbitals [18]. The latter was probed by a recent experiment with a polariton lattice of semiconducting micropillars [19]. The deformation of these bands under appropriate lattice distortion highlights the scenario described in this letter [20]. This experimental work is one of the main motivations for our present study.

Figure 3. Spectrum of the staggered Mielke model for δ = 0, 0.2, 1 and −π < k_x, k_y < π. When δ is finite, two Dirac points appear at the M point and merge at the X point (π/2, −π/2) when δ = 1.

Staggered Mielke model - In order to study this problem on a concrete simple model, we consider the tight-binding Hamiltonian visualized in Fig. 2. It has a checkerboard structure with all identical hopping terms t. First proposed by Mielke, this is the simplest two-band model exhibiting a flat band [21]. In addition, we consider the effect of a staggered on-site potential ∓V, and we set δ = V/2t. From the original tight-binding Hamiltonian H, we introduce the Bloch Hamiltonian H(k), obtained by a Fourier transform over the positions R of the Bravais lattice sites; it has the property H(k + G) = H(k), where G is a reciprocal lattice vector. Here we define 2t = −1, and the interatomic distance a is taken as a = 1. Fig. 3 shows the Mielke spectrum under application of the on-site staggered potential δ and its evolution between the two merging scenarios. When δ = 0, the spectrum consists of a dispersive band touching quadratically a flat band of energy −1 at the M point (0, π) (see Fig. 2 for the positions in the reciprocal lattice). When δ is finite, the flat band becomes dispersive in the energy range [−1−|δ|, −1+|δ|], and the upper band extends in the range above it. The quadratic touching point splits into two Dirac points at the energy ε_D = −1 + |δ|. Depending on the sign of δ, the Dirac points emerge along one edge or the other of the BZ (δ > 0 in the figures).
When δ → ±1, these two Dirac points merge at the X or Y point (π/2, ∓π/2), and the spectrum at this merging is semi-Dirac (here asymmetric), as expected from the general arguments discussed below [5]. For δ > 0, the full evolution of the spectrum along the merging line (the diagonal M-M', see Fig. 2) is plotted in the top panels of Fig. 1 (a-d). Since the geometric properties of this model do not depend on the identity term σ_0, we concentrate on the symmetric part H_s(k) of the Hamiltonian. Its energy spectrum is symmetric, and its evolution along the merging line upon application of the on-site staggered potential δ is depicted in Figs. 1 (e-h). When δ = 0, the spectrum is quadratic, and it splits into two Dirac points when δ is finite, very much like the distortion of the quadratic touching in the bilayer graphene spectrum under strain [9,22]. Unlike the case of graphene bilayer, where there are two quadratic points (at K and K') with opposite winding numbers w = ±2, here there is a single quadratic point in the BZ, which occurs at a TRIM (the M point). When δ → 1, the spectrum converges towards a semi-Dirac point, following the (+-) scenario for distorted graphene [3-6]. What is the nature of these Dirac points emerging from a flat band? What are their topological properties? To answer these questions, we now concentrate on the motion of the Dirac points along the M-M' line (k_y = k_x − π) and their merging at the X point.

Emergence (+,+) at the M point - This situation arises when the parameter δ is close to 0. We expand the symmetric Hamiltonian near the M point located at the south of the BZ (Fig. 2). Writing k = (0, −π) + q, it has the local form

H_M = (1/2)(q_y² − q_x²) σ_x + (δ − q_x q_y) σ_z.

Since we will follow the motion of the Dirac points along the diagonal M-M', it will be convenient to use the rotated coordinates q_∥ = (q_y + q_x)/√2 and q_⊥ = (q_y − q_x)/√2, corresponding to the coordinates along the merging axis and along the perpendicular axis, respectively. We find

H_M = q_∥ q_⊥ σ_x + [δ − (q_∥² − q_⊥²)/2] σ_z.

This is the form of the universal Hamiltonian H_++ (Eq. 2). Here, unlike the case of bilayer graphene where the low-energy Hamiltonian is written with the Pauli matrices (σ_x, σ_y), this Hamiltonian is real and involves the matrices (σ_x, σ_z). When δ = 0, the associated winding number is 2 around the σ_y direction, and the spectrum is locally quadratic, ε(q) = ±(1/2)(q_∥² + q_⊥²).

Merging (+,-) at the X point - This situation arises when δ is close to 1 (a similar situation occurs at the Y point when δ is close to −1). Introducing again the coordinates q_∥ and q_⊥ and neglecting higher-order terms, the symmetric Hamiltonian in the vicinity of the X point has the form

H_X = [(δ − 1) + q_∥²/2] σ_y + √2 q_⊥ σ_z,

which is the universal Hamiltonian H_+- (Eq. 1), written here in (σ_y, σ_z) space. It describes the merging of two Dirac points with winding ±1 around the σ_x direction. When δ = 1, the two charges annihilate, there is no winding anymore, and the spectrum is semi-Dirac: ε(q) = ±√(2 q_⊥² + q_∥⁴/4).

Following the moving Dirac points - We conclude from these two limits that, upon variation of the parameter δ > 0, the emergence and merging of Dirac points are described by the two universal Hamiltonians, respectively H_++ and H_+-, the first one involving the pseudospin components (σ_x, σ_z) and the other one the components (σ_y, σ_z). This implies a continuous rotation in pseudospin space during the motion of the Dirac points.
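Before following this rotation, one can check numerically that the two Dirac points born at M indeed carry identical charges, by integrating the winding of the planar pseudo-field (h_x, h_z) of the local Hamiltonian H_M around each of them; δ and the contour radius below are illustrative choices, not tied to the figures:

```python
import numpy as np

def h_M(q_par, q_perp, delta):
    """Planar pseudo-field (h_x, h_z) of the local Hamiltonian H_M near M,
    in the rotated coordinates along / perpendicular to the M-M' diagonal."""
    return q_par * q_perp, delta - (q_par**2 - q_perp**2) / 2.0

def winding(center, delta, radius=0.05, n=2001):
    """Winding number of (h_x, h_z) on a small closed circle around `center`."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)        # closed contour
    q_par = center[0] + radius * np.cos(theta)
    q_perp = center[1] + radius * np.sin(theta)
    hx, hz = h_M(q_par, q_perp, delta)
    psi = np.unwrap(np.arctan2(hz, hx))             # angle of the planar field
    return round((psi[-1] - psi[0]) / (2.0 * np.pi))

delta = 0.1
q0 = np.sqrt(2.0 * delta)   # Dirac points at q_par = +/- sqrt(2*delta), q_perp = 0
for s in (+1, -1):
    print(f"winding around q_par = {s * q0:+.3f}: {winding((s * q0, 0.0), delta):+d}")
# Both contours give the same winding number: the ++ scenario.
```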
To follow this rotation, we now expand the Hamiltonian locally in the vicinity of the two moving Dirac points D, D' of coordinates k_x^D = arcsin(…); the local Hamiltonian and the associated velocities involve the unit vector u_φ = cos k_y^D u_x + sin k_y^D u_y. Therefore, for a given value of the parameter δ, the Hamiltonian H_D in the vicinity of a Dirac point is written in terms of only two Pauli matrices, σ_z and σ_φ = σ · u_φ. Around each Dirac point, the normalized pseudo-magnetic field h/|h| rotates along an equator of the Bloch sphere, whose orientation varies when moving the position of the Dirac points from the M point to the X point. Therefore we are led to define a "winding vector", perpendicular to this equator (along ±u_z × u_φ). Fig. 4 shows the evolution of the winding vector from the M to the X point. At the M point, two Dirac points emerge from a quadratic touching with identical winding vectors w = u_y. They merge at the X point with opposite winding vectors ±u_x (Fig. 4). Near the M point, both velocities vanish before the merging into a quadratic point. Near the X point, the velocity vanishes along the ∥ direction and stays finite along the ⊥ direction, announcing the merging into a semi-Dirac point. Finally, we note that the motion of Dirac points between the TRIMs is not necessarily along a straight line; it is in general determined by the vanishing of the off-diagonal element of H_s(k).

Multiband systems and experimental motivation - For pedagogical purposes, we have chosen to describe in detail a simple two-band problem. The scenario presented here is generic of more complex situations encountered in multiband spectra, which we now briefly illustrate in the cases of a 3-band and a 4-band problem. First we consider a square variant of the Kagome lattice, which is known to exhibit a flat band touching quadratically a dispersive band [23]. Under appropriate variation of the hopping parameters, we find that a pair of Dirac points between the two lower bands emerges from the Γ point and merges at another TRIM, with a semi-Dirac spectrum preceding the opening of a gap (Fig. 5) [17]. Then we consider the spectrum of the honeycomb lattice with two orbitals p_x, p_y per site. It consists of four bands arranged as two bands similar to the p_z bands of graphene sandwiched between two flat bands (Fig. 6) [18]. In addition, a staggered potential opens a gap in the middle of the spectrum (like in boron nitride) and separates the two upper bands from the two lower bands. Each dispersive band touches a flat band at the Γ point. We then apply a uniaxial distortion similar to that applied in artificial graphenes [5]. Fig. 6 shows that under this distortion two Dirac points emerge from the quadratic touching points (the upper one and the lowest one), therefore following the ++ scenario, and that they ultimately merge at a TRIM following the +- scenario.

Discussion and perspectives - In this letter, we have shown on a simple model that a Dirac point is characterized by an integer and a direction on the Bloch sphere, leading to the notion of a winding vector rather than a winding number. We summarize here the scenario: for two bands, a Bloch Hamiltonian H(k) = h(k) · σ involves three Pauli matrices and can be represented as a point on a Bloch sphere, i.e. the position of a normalized pseudo-magnetic field h/|h|. In 2D, contact points between two bands are unstable unless a particular symmetry protects their existence.
In such a case, locally around a contact point k_D, the Hamiltonian H(k_D + q) = h_a(q) σ_a + h_b(q) σ_b involves only two Pauli matrices, σ_a = u_a(k_D) · σ and σ_b = u_b(k_D) · σ. The pseudo-magnetic field is therefore restricted to move on an equator. The contact point is then characterized by the number of times the latter winds around the direction u_a × u_b perpendicular to the equator when encircling the contact point in reciprocal space. Most generally, the winding vector is given by

w = (1/2π) (∮_C dψ) u_a × u_b,

with tan ψ = h_b(q)/h_a(q), and the winding number |w| is a non-negative integer. For a Dirac point the winding number is 1. At a TRIM, it is 0 or 2, depending on the merging of the two Dirac points. The notion of a winding vector becomes crucial when its direction changes as the contact point moves in reciprocal space. In the present case, it solves the apparent paradox that a single pair of Dirac points is created with a total winding number of 2 (++ scenario: (+1, +1) -> +2) and annihilated with a total winding number of 0 (+- scenario: (+1, -1) -> 0). The situation has been studied here within the simplest two-band lattice model. It is universal in the sense that it properly describes the evolution of the crossing point between two bands in a multi-band system. It describes the structure of the Dirac points emerging from the touching with a flat band. Recent experiments in higher bands of an artificial graphene have successfully shown new pairs of Dirac points emerging from a flat band, whose evolution follows the mechanism described in this letter [20]. Given the universality of this mechanism, it should be observed routinely in the many new condensed-matter or 2D artificial structures exhibiting several bands in the excitation spectrum. In 3D, contact points between two bands are generic and do not need any symmetry protection (see e.g. Weyl semi-metals). There is no winding vector in this case, since these topological defects are characterized by a charge (the wrapping number), which does not require any direction to be specified [24].

We acknowledge useful discussions with A. Amo, J. Bloch, A. Mesaros and M. Milićević. L.-K. L. is supported by the Tsinghua University Initiative Research Programme and the 1000 Youth Fellowship China.
THE RECONSTRUCTION OF THE PROTO-SEMITIC GENITIVE ENDING AND A SUGGESTION ON ITS ORIGIN

The Proto-Semitic genitive ending on triptotic nouns is commonly reconstructed as *-im (unbound state)/*-i (bound state). In Akkadian, however, this case ending is long -ī- before pronominal suffixes. Since the length of this vowel is unexplained, I argue that it is original and that the Akkadian bound state ending normalized as -i should also be reconstructed as long *-ī, explaining its retention in word-final position. This form seems more original than Proto-West-Semitic *-i. Hence, the Proto-Semitic bound state genitive ending should also be reconstructed as *-ī. Through internal reconstruction supported by the parallel of kinship terms like *ʔab-um 'father', I arrive at a pre-Proto-Semitic reconstruction of the genitive ending as *-ī-m (unbound), *-ī (bound). This paper then explores a hypothetical scenario where the genitive ending *-ī is derived from the adjectivizing 'nisbe' suffix through reanalysis of adjectival constructions like *bayt-u śarr-ī 'the/a royal house' as construct chains with meanings like 'the/a king's house'. With the addition of mimation and the resultant vowel shortening, this yielded the Proto-Semitic construction with a genitive, *bayt-u śarr-im. The genitive case failed to develop with diptotic nouns because they did not take mimation, and in the dual and plural because the nisbe adjective was derived from the uninflected (singular) noun stem; hence, these categories all retain the more original contrast between the nominative and an undifferentiated oblique case.

INTRODUCTION
Proto-Semitic can securely be reconstructed with a simple case system,¹ distinguishing three cases: nominative, genitive, and accusative.² Based on the usage of these cases in Akkadian, Classical Arabic, Ugaritic, and Geʿez, it is clear that the nominative mainly marked the subject and non-verbal predicate, the genitive marked the nomen rectum (the second member of a construct chain/ʔiḍāfah) and words governed by a preposition,³ and the accusative marked verbal objects, together with a number of other uses. We can reconstruct multiple declension classes for Proto-Semitic, several of which are limited to certain numbers of the noun and adjective (dual, plural), while others were used for both singulars and broken plurals (i.e. plurals with different stems than the associated singulars). Despite the syntactic differentiation of three separate cases, most of these declensions have only two separate case endings, distinguishing the nominative from the genitive/accusative or oblique (see Al-Jallad & van Putten 2017).⁴ As the triptotic ('three-case') declension is more common, having completely replaced the diptotic ('two-case') declension in many languages, it is usually taken as the norm. The question then arises why the other declensions all lack a distinction between the genitive and accusative. Based on the similarity to the relevant cases of the triptotic declension, it is sometimes asserted that the diptotic declension has generalized the accusative or that the plural declensions (and perhaps the dual) have generalized the genitive (e.g. Birkeland 1940: 48-52; Kuryłowicz 1951: 224; Kienast 2001: § 138.1; Hasselbach 2007). Alternatively, many scholars believe that the diptotic declension preserves an older state of affairs (e.g. Philippi 1871: 181; Diakonoff 1988: 60, 64; Hasselbach 2013: 325-326).
The four non-triptotic declensions then continue an original nominative-oblique case system, while the triptotic declension innovated a distinction between genitive and accusative. The genitive case, in particular, shows a striking similarity in form and meaning to a derivational morpheme that we will refer to as the nisbe suffix. This suffix, characterized by a high front vowel or palatal glide, occurs in many if not all Semitic languages and derives adjectives from nouns, often indicating ancestry or geographic origin, as in Old Assyrian aššur-i-um 'Assyrian', zalp-āʾi-um 'from Zalpa' (Kouwenberg 2017: § 7.2.10); Biblical Hebrew yiśrʔēl-ī 'Israelite' (Joüon & Muraoka 2009: § 88Mg); Biblical Aramaic kaśd-āy 'Chaldaean' (Bauer & Leander 1927: § 51dʹʹʹʹ); Geʿez mədr-āwī 'earthly', bāʕəl-āy 'rich' (Tropper 2002: § 42.155-156); and Classical Arabic ʕarab-iyy- 'Arab(ic)' (Fischer 1972: § 65b). It has frequently been suggested that either the nisbe suffix derives from the genitive ending, or vice versa. Previous proposals, however, remain vague on the exact developments that led to the creation of either morpheme, or rely on unmotivated and unrealistic ad hoc sound changes. 5 Moreover, proposals that derive the nisbe suffix from the genitive ending leave unexplained why only the triptotic declension distinguishes between the genitive and accusative cases - while it is cross-linguistically normal for less basic categories like the dual and plural to show some case syncretism compared to the singular (see Schneider 2020: 15-22), the diptotic declension is used for singulars as well as broken plurals and is not obviously less basic than the triptotic declension.

3 For a recent overview of genitive constructions in Semitic, see Cohen 2019.
4 Suchard & Groen (2021) argue that the 'masculine' and 'feminine' plural declensions go back to a unified pre-Proto-Semitic plural declension of nominative *-u-, oblique *-i-. The quotation marks around 'masculine' and 'feminine' are meant to convey that not all nouns in the 'masculine' declension are syntactically masculine, nor are all those in the 'feminine' declension syntactically feminine.
5 The most explicit account I have found is given by Brockelmann (1908: § 245a), who appeals to an ad hoc apocope of the nisbe's case ending together with the loss of the nomen regens's definite article (which he reconstructs for Proto-Semitic): *hā-bayt-ū hā-malik-īy-ū 'the royal house' > *bayt-ū hā-malik-ī 'the king's house' (transcription adapted from the original). The phonetic rationale behind these deletions is unclear.

In this paper, I will re-examine the reconstruction of the Proto-Semitic genitive ending. Based on evidence from Akkadian, I will argue that while the traditional reconstruction of the unbound state ending as *-im is correct, the bound state ending should be reconstructed as *-ī instead of traditional *-i. For a precursor of Proto-Semitic, the unbound form may also have had a long vowel, *-ī-m. This increases the similarity between the genitive ending and the nisbe suffix, which will lead me to propose a very hypothetical yet concrete scenario of how the former may have been created through reanalysis of the latter. Besides explaining the origin of the triptotic genitive ending, the proposed developments will also account for the lack of a formal genitive-accusative distinction in the diptotic, dual, and plural declensions.
THE PROTO-SEMITIC GENITIVE ENDING

The reconstruction of the unbound state of the genitive ending is straightforward. The most relevant attested forms are Akkadian -im (von Soden 1995: § 63b); Classical Arabic -in (Fischer 1972: § 147); Ugaritic /-i/ (Tropper 2012: § 54.111); and Geʿez -ə, which was apocopated in the later reading tradition (Tropper 2002: § 42.42). 6 Taking into account the replacement of mimation by nunation in Arabic and the loss of mimation in Ugaritic and Geʿez, these forms all support a Proto-Semitic reconstruction of *-im. Unsurprisingly, virtually all scholars accept this reconstruction (e.g. recently Huehnergard 2019: 60; I am unaware of any competing reconstructions in the recent literature).

In the nominative and the accusative, the bound state case ending is simply the unbound state ending minus mimation: *-um and *-am become *-u and *-a. 7 This would lead us to expect *-i for the genitive. This reconstruction is supported by Classical Arabic -i and Ugaritic /-i/ as well as Geʿez -ə- before pronominal suffixes (the Geʿez unsuffixed bound state ending -a does not inflect for case). In Akkadian, however, we find -ī- before pronominal suffixes, which is now generally taken to be a long vowel; it is frequently spelled plene, seems to count as a long syllable in poetry, and prevents following light syllables from undergoing syncope (Hecker 2000). 8 This ending is consistently found on nouns that are in the genitive, as in ina kašād-ī-ki 'on your (f.sg.) arrival'; ana amt-ī-ša 'for her maidservant'; and šar māt-ī-šunu 'the king of their (m.) land' (von Soden 1995: § 65a; examples taken from Huehnergard 2011: 86).

6 The reconstruction of a vocalic ending in Geʿez is mainly based on metrical texts treating forms like nəgūś(-ə) 'king (nom./gen.)' as trisyllabic; note that the Geʿez script does not distinguish between C and Cə signs, making forms like <n(ə)-gū-ś(ə)> ambiguous. Geʿez ə is the outcome of a merger between *i and *u; the ending -ə thus supports the reconstruction of the genitive ending with a short, high vowel, but does not unambiguously point to *-im.
7 Cf. Classical Arabic -u, -a (Fischer 1972: § 149); Ugaritic /-u/, /-a/ (Tropper 2012: § 55.22); the frequent (but confused) use of the nominative ending -u in construct in literary Old Babylonian (von Soden 1995: § 64a); and Geʿez -ə-, -a- before pronominal suffixes as well as uninflecting -a in the bound state before nouns (Tropper 2002: § 42.541).
8 Hecker also adduces the argument by Sommerfeld (1987) that in Old Akkadian, the genitive ending before suffixes is spelled with signs marking syllables with long /ī/ where this is distinguished from short /i/, but as Hasselbach (2005) convincingly shows, Old Akkadian spelling does not use different signs to mark vowel length, only vowel quality.

The genitive ending contrasts with the nominative and accusative forms, which have separate markers, -ū- and -ā- respectively, in a small number of nouns like ab- 'father'; in the remaining majority of nouns, both the nominative and the accusative are marked the same way, either by a short -a- or by zero, for example, ṭupp-a-ša 'her tablet (nom./acc.)'; kalab-ša 'her dog (nom./acc.)' (examples from Huehnergard 2011: 86-87). This shared Akkadian distribution of bound state endings before pronominal suffixes is summarized in Table 2.
In Old Akkadian, the inflection of the bound state without pronominal suffixes is similar to that of the suffixed noun: a genitive ending -i is contrasted with an endingless nominative-accusative form (Hasselbach 2005: 182; 2013). 9 Later dialects of Akkadian do not distinguish case in the unsuffixed bound state, but many nouns take a bound state ending -i. This ending is obligatory for some nouns ending in two consonants (e.g. libb-i 'heart of', umm-i 'mother of', qīšt-i 'gift of') and optional for some monosyllabic nouns (e.g. šar ~ šarr-i 'king of', qāt ~ qāt-i 'hand of', bēl ~ bēl-i 'master of'); other nouns take no ending but may insert an epenthetic vowel to resolve a word-final consonant cluster, as in kalab 'dog of' ← kalb-um 'dog (nom.)' (von Soden 1995). This distribution can be understood as resulting from the tension between the construct chain-initial position of the nomen regens and the suffixal inflection typical of the older Semitic languages: as the word in the bound state is the head of the construct chain, it should normally be inflected to mark the case of the entire construct chain, but this causes the case-marking morphemes to appear in the middle of the construct chain, an unhappy position for suffixes. 11

As was noted above, the Akkadian genitive ending before suffixes was a long -ī-. Despite its conventional transcription as -i, the unsuffixed bound state ending may also historically reflect a long vowel (whether it was still phonetically long in historical Akkadian or not). This is supported by the bound states ab-i 'father of' and aḫ-i 'brother of' (Huehnergard 2011: 59), 12 which go back to the Proto-Semitic genitive forms *ʔab-ī and *ʔaḫ-ī with the lengthened case vowel that is characteristic of these nouns (cf. Wilson-Wright 2016; more on these below). These forms show both that -i is the normal Akkadian reflex of *-ī in this position and that in several words, at least, the genitive bound state ending was generalized to non-genitive positions (note that the -i of ab-i and aḫ-i can hardly be explained as an epenthetic vowel). Kouwenberg (2017: § 5.5.1.1) additionally notes that the bound state ending is occasionally spelled plene in Old Assyrian, although he calls this ending "doubtless[ly] short"; it is unclear to me what this strong conviction is based on, however.

Either way, deriving the -i ending from historically long *-ī makes the paradigm more regular, as what was originally the genitive bound state ending can now be reconstructed for Proto-Akkadian as *-ī(-) in all positions, both word-internally and word-finally. This reconstruction also explains why -i is preserved word-finally, while the nominative and accusative endings *-u and *-a were largely lost in the bound state. As [i] is a less sonorous vowel than [a], it tends to be reduced more often and lengthened less frequently. 13 It is therefore unexpected that *-i would be retained if *-a was lost. If the unsuffixed (genitive) bound state ending derives not from *-i but from *-ī, however, it is clear why it was preserved while *-u and *-a were lost: only short vowels were deleted in this position, but long *-ī survived.

To sum up, it seems likely that the Old Babylonian and Old Assyrian bound state ending -i was originally limited to the genitive, as in Old Akkadian. Like the genitive ending before suffixes, it was probably originally long, *-ī. Hence, the recent precursors of the Akkadian triptotic case endings may be represented as in Table 4.
As we have seen, West Semitic points instead to a genitive bound state ending with a short vowel. As there is no unambiguous West Semitic evidence for a long case vowel in the regular triptotic inflection (thus excluding the nouns like *ʔab- 'father'), the Proto-West-Semitic situation can be reconstructed as in Table 5. 14

Table 5 The triptotic case endings in Proto-West-Semitic

Nominative: unbound *-um, bound *-u
Genitive: unbound *-im, bound *-i
Accusative: unbound *-am, bound *-a

The Proto-Semitic situation is generally assumed to be the same as that in Proto-West-Semitic. The length of the Akkadian genitive bound state ending, which is commonly accepted in the form used before pronominal suffixes, would then be secondary. Yet no plausible motivation for secondary lengthening has been identified. If it is due to sound change, we might expect the nominative and accusative endings *-u and *-a to have undergone the same lengthening. Especially *a, being the most sonorous vowel, should be at least as susceptible to lengthening as *i based on crosslinguistic phonetic tendencies (cf. footnote 13 above). Yet as we have seen, the nominative and accusative endings are lost in the unsuffixed bound state, suggesting that they remained short. Before pronominal suffixes, *-a- is retained in non-genitive forms that have -i word-finally, for example libb-a-šu 'his heart (nom./acc.)', but without lengthening. The lack of lengthening is confirmed by the syncope in Old Assyrian forms like *šuqult-a-šina > šuqult-a-šna 'their (f.) weight', *ṭupp-a-kunu > ṭupp-a-knu 'your (m.pl.) tablet' (Kouwenberg 2017: § 9.5.3; normalization mine); since syncope does not normally take place after heavy syllables, **šuqult-ā-šina and **ṭupp-ā-kunu would have remained unchanged (von Soden 1995: § 12).

We do find long case vowels before suffixes in the feminine plural and masculine adjectival plural, which are -āt-ū/ī- and -ūt-ū/ī- before suffixes, respectively; compared to the corresponding unbound state endings -āt-um/im and -ūt-um/im, these appear lengthened (Huehnergard 2011: 85). 15 But these long vowels are most likely analogical with the substantival masculine plural ending -ū/ī- (von Soden 1995: § 65k). 16 Speakers interpreted the long vowels in forms like kunukk-ū-ša 'her seals (nom.)', il-ū-šunu 'their (m.) gods (nom.)', and dayyān-ī-kunu 'your (m.pl.) judges (gen./acc.)' as part of a distinct set of pronominal suffixes to be used on plural nouns. They then attached them wholesale to the plural base of feminine words and masculine adjectives, yielding forms like epš-ēt-ū-ša 'her deeds (nom.)', puḫr-āt-ī-kunu 'your (m.pl.) assemblies (gen./acc.)', and mīt-ūt-ī-šunu 'their (m.) dead (gen./acc.)'. 17 Analogy could hypothetically have lengthened the Akkadian genitive singular bound state ending in the same way, but then there is no reason why the nominative *-u- should have remained short.

The length of the Akkadian genitive singular bound state ending thus remains unexplained. An alternative explanation would be that it is simply inherited. If we assume that the vowel length is not secondary but actually continues a Proto-Semitic form with a long vowel, *-ī(-), we must then explain the short *-i(-) found in West Semitic. Fortunately, this is not difficult. Based on the alternation between *-um : *-u in the nominative and *-am : *-a in the accusative, analogically shortening the vowel of the bound genitive ending to *-i based on the unbound ending *-im would be trivial.
The length of the bound genitive singular ending would thus appear to be old, giving us the Proto-Semitic reconstruction presented in Table 6 (identical to the Proto-Akkadian paradigm given in Table 4).

PROTO-SEMITIC SHORTENING OF VOWELS IN CLOSED SYLLABLES AND THE GENITIVE ENDING

When reconstructing linguistic stages before Proto-Semitic, we must largely rely on internal reconstruction (see Hock 1991: 532-555). The first step in internal reconstruction is positing that a certain irregularity or allomorphy observed in a language derives from a more regular situation. In the case of our genitive ending, reconstructing the bound state as *-ī has disturbed the regularity of the rule that derives the unbound state endings from the bound state endings by adding *-m. As no motivation for the lengthening of *-i to *-ī in this position can be found in Proto-Semitic either, let us assume that the paradigm in Table 6 goes back to a more regular one where in the genitive, too, the unbound state was formed by adding mimation to the bound state, as presented in Table 7.

Table 7 The triptotic case endings in pre-Proto-Semitic

Nominative: unbound *-um, bound *-u
Genitive: unbound *-ī-m, bound *-ī
Accusative: unbound *-am, bound *-a

The difference between the paradigms of Table 7 and Table 6 is that *-ī-m has been shortened to *-im. A priori, it is plausible for a long vowel to be shortened in a closed syllable; crosslinguistically, this is a very common change, and it must also be reconstructed for later precursors of Geʿez (Tropper 2002: § 37.3), Arabic (van Putten 2017a: 62-63), and Hebrew (Suchard 2019b: 139), among other languages. Assuming that this sound change took place in Proto-Semitic thus satisfies the condition that the developments postulated by internal reconstruction be natural.

But we do not need to posit such a change for Proto-Semitic merely in order to explain the triptotic genitive ending. Supporting evidence comes from the kinship terms that were briefly mentioned above, 'father' and 'brother', as well as 'husband's father'. Based on their similar inflection in various West Semitic languages and Akkadian, these words can be reconstructed for Proto-Semitic as in Table 8. 18 The long case vowels of these nouns have been explained as resulting from a conditioned loss of *w, with compensatory lengthening in open syllables only; hence, *ʔabw-um > *ʔab-um, but *ʔabw-u > *ʔab-ū. 20 While this may be so, the phonetic rationale behind this conditioning is unclear: it can hardly be the case that the lengthening was blocked in closed syllables to leave the syllable's weight unchanged, as the sound change in the bound state clearly does change the syllable structure (from *ʔab.wu or *ʔa.bwu to *ʔa.bū, i.e. from CVC.CV or CV.CCV to CV.CVV). Hence, a consistent loss of the glide with compensatory lengthening in every context seems at least as plausible. This unconditioned lengthening may receive confirmation from pre-Proto-Semitic *pw-um > Proto-Semitic *p-ūm 'mouth', if the length in Aramaic forms like pūm-ā (with the mimation reanalyzed as part of the stem; e.g. Jastrow 1950 s.v.) is original, as it would appear to be, since no secondary cause for the vowel length is apparent. 21 If the loss of *w between a consonant and a vowel caused compensatory lengthening in closed syllables as well, this would initially have yielded the paradigms given in Table 9. Positing a Proto-Semitic sound change that shortened long vowels in closed syllables then turns both *ʔab-ū-m etc. into *ʔab-um etc. and the unbound genitive ending of other triptotic nouns, *-ī-m, into *-im, bringing us back to the Proto-Semitic paradigms of Tables 6 and 8.
That the long vowel was preserved in *pūm 'mouth' may show that the shortening did not operate in monosyllables, or it may be due to a difference in stress. 22 We can thus reconstruct unbound *-ī-m/bound *-ī as an earlier form of the triptotic genitive ending.

20 E.g. Huehnergard (2006; 2008), but I do not find any statement on a lack of lengthening in closed syllables there, nor in Huehnergard 2019; it may be in Huehnergard 2010, which I have not seen.
21 For the reconstruction of *w as this word's original second radical, cf. the Geʿez plural ʔa-faw with the ʔa- prefix common in broken plurals, which was reanalyzed as ʔaf-aw and then gave rise to the secondary singular form ʔaf based on singular-plural pairs like ʔab : ʔab-aw 'father(s)'. On the reconstruction of word-initial consonant clusters like *pw- in Proto-Semitic and pre-Proto-Semitic, see Testen 1985; Blau 2006; Suchard 2017.
22 Biblical Hebrew II-wy G-stem imperatives also have a long vowel (Joüon & Muraoka 2009: § 80c): qūm 'stand up (m.sg.)', not expected *qūm > *qum > **qōm with shortening (and the later development of stressed *u to ō). This might offer further support for the lack of shortening in monosyllables both in Proto-Semitic and in the later shortening sound change affecting a more recent precursor of Hebrew. Alternatively, the form may simply be analogical with the plural, qūmū, or the Imperfect, *yaqūmu > yāqūm. Classical Arabic does show shortening in these imperatives, e.g. qum (Fischer 1972: § 244).

THE NISBE SUFFIX AS A POSSIBLE SOURCE OF THE GENITIVE ENDING

The most widespread form of the nisbe suffix is generally reconstructed for Proto-Semitic as *-īy-. Classical Arabic -iyy- is the regular outcome of *-īy- (which is also often encountered as a transcription of the nisbe suffix), as in the Aramaic loanword *nabīy- > nabiyy- 'prophet'. The Biblical Hebrew form -iyy- seen before suffixes, as in mōʔăḇ-iyy-ā 'Moabite (f.sg.)', is similarly the regular outcome of *-īy- before vowels, as is also seen in *naqīy-īma > nqiyyīm 'innocent (m.pl.)' (Suchard 2019a: 59); word-finally, *-īy- became -ī. In Old Babylonian, the nisbe ending contracts with the following case vowel, yielding the singular paradigm of nominative -ûm, genitive -îm, accusative -iam (Huehnergard 2011: 41); uncontracted forms occur in the rare numeral ištīʾum 'first' (Huehnergard 2011: 239) and regularly in Old Assyrian forms like Akkidium 'Akkadian' (Kouwenberg 2017: § 7.2.10), where a consonantal y is probably present before the case ending but not usually written. 23 The Old Assyrian forms are inflected like adjectives such as rabium 'big' < *rabiy-um and unlike nouns that may have ended in *-īy-, such as warīʾum or werīʾum 'copper'. 24 Hence, Kouwenberg interprets the shape of the nisbe suffix in Old Assyrian as -ī- or -iy-; as far as I am aware, this also works for Old Babylonian. It does contradict the usual Arabic and Hebrew reflexes of the suffix, although Biblical Hebrew may also attest it in the noun ʔišše 'offering brought by fire' if this is to be reconstructed as *ʔiss-iy-, with a nisbe suffix with short *i attached to the reconstructed stem of ʔēš 'fire'. 25

Like other adjectives, those formed with the nisbe suffix were inflected for case in Proto-Semitic. The suffix was thus always followed by a vowel: nominative *-īy-um, genitive *-īy-im, and accusative *-īy-am if we favour the reconstruction based on West Semitic forms like Classical Arabic -iyy-, or *-iy-um, *-iy-im, *-iy-am based on the Old Assyrian and possibly Babylonian forms with *-iy-.
As Kouwenberg's analysis already implies, it is attractive to identify the final *y as a glide that was automatically inserted between *ī and another vowel. The actual form of the nisbe suffix would then be *-ī-: the extended West Semitic form *-ī-y- reflects the insertion of a glide after the suffix, while the Old Assyrian form *-iy- reflects the breaking of the long vowel into a short vowel and a semivowel. It seems unlikely that the case endings were only added to the nisbe separately in Proto-West-Semitic and Proto-Akkadian; perhaps both forms of the suffix coexisted in Proto-Semitic, or perhaps they reflect different parts of the paradigm, with one being generalized in Proto-West-Semitic (excluding possible holdovers like Hebrew ʔišše) and the other in Proto-Akkadian. 26 One may also wonder whether the glideless Biblical Hebrew plural forms like yhūḏ-īm 'Judahites' (sg. yhūḏ-ī), usually taken to reflect an ad hoc contraction of *-īy-īm to -īm, might not directly preserve the older form of the suffix in a context where no glide insertion was called for: *-ī-īma > -īm (although Biblical Hebrew forms where the glide is present, such as ʕiḇr-iyy-īm 'Hebrews', also occur).

As will be clear to the reader, the hypothesis that *-ī- is the original shape of the nisbe suffix makes it formally identical to the triptotic genitive case ending *-ī- reconstructed above. This sits well with the frequent assertion that these two morphemes are historically related. We will now consider how both morphemes, as well as the alternative nisbe suffixes, might have arisen from one and the same form in a way that accounts for their attested distribution and usage.

If the original shape of the nisbe suffix was *-ī- and the glide *y was added to facilitate the combination with the following case endings, this implies that at one point, *-ī occurred word-finally, without case endings attached. It is impossible to say for certain whether this reconstruction should be placed in a precursor of Proto-Semitic that did not yet have case endings or whether adjectives formed with the nisbe suffix (or perhaps all adjectives) were uninflected for case at a time when other words were. In the speculative account that follows, I will add case endings to other words given in the reconstructed examples, but not to the oldest form of the nisbe; I ask the reader to bear in mind that this may be anachronistic.

The second difference in nominal morphology that we need to posit for this hypothetical ancestor of Proto-Semitic is the absence or optional nature of mimation. The secondary nature of mimation has long been recognized, although scholars disagree about its original function. As it occurs on various parts of the noun phrase in Proto-Semitic, it seems plausible that it originally functioned as an article, whether it marked definiteness (as I find more likely based on the parallel development of the definite article in Eastern Aramaic; see Tagliavini 1929: 242; Gzella 2015: 337-338) or indefiniteness (thus, e.g., Brockelmann 1908: § 246Ca; Kienast 2001: § 139.5, following Osiander 1866; Barth 1913: 130; Tagliavini 1929; Diakonoff 1988: 66-67; contra Gelb 1930; Lipiński 2001: § 33.16; Stempel 1999: 92). Hence, the same word could probably occur with or without mimation at a certain point, the unmimated form being the older of the two.
Studies on grammaticalization in languages of the world have revealed that it is more common for words and morphemes to become more grammatical over time than vice versa (see Rubin 2005: 106; Norde 2011). Like many other scholars (e.g. Brockelmann 1908: § 245a; Kienast 2001: §§ 135.3, 158.7), I will therefore assume that the genitive ending, a purely syntactically conditioned inflectional marker, developed out of the derivational nisbe suffix. 27 In contexts where mimation was not used, then, the original construction would have been that given in (1).

(1) *bayt-u śarr-ī 'the/a royal house'

Here, the nisbe suffix *-ī has been attached to the noun stem *śarr- 'king' to form an adjective meaning 'belonging to the king, royal'. I assume that the construct chain, which also occurs in Ancient Egyptian and may thus well be of Proto-Afroasiatic age (see Diakonoff 1988: 63), already existed at this time, as illustrated in (2):

(2) *bayt-u śarr-a 'the/a king's house'

As these examples are set in a period before the grammaticalization of the genitive case, the nomen rectum occurs in the oblique case here, marked by the non-nominative case ending *-a. Given the semantic closeness between 'the/a royal house' and 'the/a king's house' and based on construct chains like (2), (1) could be reanalyzed as a construct chain: 28

(1′) *bayt-u śarr-ī 'the/a royal house' or 'the/a king's house'

In many regards, construct chains are treated as single words for syntactic purposes (cf. Lipiński 2001: § 51.24). Accordingly, in contexts where mimation was added, it was added to the end of the construct chain as a whole. For speakers who interpreted (1′) as a construct chain, this would yield:

(3) *bayt-u śarr-ī-m 'the/a king's house'

Applying the sound change shortening long vowels in closed syllables then gives us the Proto-Semitic situation, with a bound state followed by a triptotic genitive noun:

(4) *bayt-u śarr-im 'the/a king's house'

It seems likely that the formal similarity of this new ending *-im to the mimated nominative *-um and original oblique *-am facilitated its identification as a productive, case-inflected form of the noun, obscuring its adjectival origin. This reanalysis as a case marker would have gone hand in hand with the extension of the unmarked and hence originally masculine singular form *-im < *-ī to contexts where the adjective would have displayed feminine or plural agreement: where we might originally have expected a different form in hypothetical pre-Proto-Semitic phrases like *bint-u śarr-ī-t 'the/a royal daughter' or *ban-ū śarr-ī-y-ū '(the) royal children' - the reconstructions are obviously highly questionable - this variation is no longer seen in Proto-Semitic *bint-u śarr-im 'the/a king's daughter', *ban-ū śarr-im 'the/a king's children'. This kind of generalization of one part of the paradigm, which Rubin (2005: 5) refers to as decategorialization, is very common in instances of grammaticalization like the one proposed here. Over time, the newly grammaticalized genitive ending came to replace the more general oblique ending in its function of marking the nomen rectum, relegating the latter to the functions known to us as those of the Proto-Semitic accusative.
Based on the genitive's use after prepositions that are historically bound states of nouns, such as *bayn-a 'between' < *'in the intermediate space of' and *taḥt-a 'under' < *'at the bottom of', it then spread to words governed by prepositions that do not transparently derive from bound states, like *bi- 'in'. Since the nisbe adjective is derivational and based on the uninflected stem of the noun, no separate forms were created based on the noun's dual or plural forms. Accordingly, no separate genitive case was grammaticalized for the dual and plural paradigms, and the old distinction of nominative vs. oblique was maintained here. 29

By other speakers, or in other contexts, (1) would still have been analysed as a noun followed by an attributive adjective. 30 At some point in the development leading up to Proto-Semitic, the adjective in this construction came to be inflected for case, agreeing with its head, while both words received mimation in the appropriate context. Providing for the two different strategies of glide insertion and vowel breaking, this gives us:

(5) *bayt-um śarr-ī̆-y-um 'the/a royal house'

The addition of mimation thus split the nisbe ending *-ī into the triptotic genitive ending *-im, which replaced the old oblique ending in some contexts, and the Proto-Semitic nisbe ending *-īy-/-iy-. In some nouns, however, this split failed to take place. An example of such a noun, which is reconstructible for Proto-Semitic and attested as a name in several Semitic languages (Dirbas 2019: 215), is given in (6):

(6) *bayt-u liʔ-at-a 'Leah's house'

Taking (6) through the steps leading to the fully inflected Proto-Semitic nisbe results in the following:

(7) *bayt-u liʔ-ī 'the/a Leahite house'
(8) *bayt-um liʔ-ī̆-y-um 'the/a Leahite house'

Note that the feminine suffix *-at- is lost before the nisbe suffix, a morphological process that is broadly attested, as in Old Babylonian šubar-t-um → šubar-ûm 'Subarian' (von Soden 1995: § 56q), Biblical Hebrew yhūḏ-ā → yhūḏ-ī 'Judahite' (Joüon & Muraoka 2009: § 88Mg), and Classical Arabic makk-at- → makk-iyy- 'Meccan' (Brockelmann 1908: § 220dα). 31 Considering the occurrence of this process in far-flung branches of Semitic and the lack of any clear motivation that would account for parallel innovation, it should be reconstructed for Proto-Semitic and is therefore reflected in the examples given here. As *liʔ-ī, etc. are adjectives, they agree in gender with their head (masculine *bayt-u(m) in these examples), not with the noun they are derived from (feminine *liʔ-at-u); hence the absence of a feminine agreement marker *-(a)t at the end of the adjective.

Importantly, some Proto-Semitic nominals never took mimation, for semantic and/or syntactic reasons that are still poorly understood. As nisbe adjectives are not included in this group, the adjective is shown to incorporate mimation in (8). The feminine name *liʔ-at-, however, was chosen here as an example of these unmimated nouns (Arabic: mamnūʕ min al-ṣarf). If a speaker were to reanalyse (7) as 'Leah's house' parallel to the reanalysis of (1) as 'the/a king's house', mimation would thus still not be added, since *liʔ-ī would be interpreted as part of the (mimationless!) paradigm of *liʔ-at-. 32 Without mimation, the conditioning would be lacking for the old nisbe suffix *-ī-m to be shortened into the new genitive ending *-im, formally so similar to the other mimated case endings. Instead, these unmimated nouns retained their old inflection markers: nominative *-u and oblique *-a.
With the spread of mimation and the new genitive case to many other words, this class of words became marginalized and isolated; while faithfully preserving the old situation, they had now become an exceptional category, known to us as the diptotic declension. As most other singulars and broken plurals had become triptotic, many languages extended the triptotic declension to these few remnants of the original nominal inflection as well, abandoning diptotic inflection altogether.

CONCLUSION

In summary, I have argued that the long -ī- vowel of the Akkadian genitive ending before pronominal suffixes originally occurred in other parts of the paradigm as well. Reconstructing the (genitive) bound state ending -i as long *-ī as well makes the paradigm more regular and provides a phonetically plausible reason why this ending was preserved word-finally while the nominative and accusative endings, *-u and *-a, were lost. Comparing this Proto-Akkadian genitive bound state ending *-ī to its Proto-West-Semitic counterpart *-i, we find that the latter can easily have been analogically shortened, while nothing accounts for the length of the former; *-ī should thus also be reconstructed for Proto-Semitic. Based on the indications for shortening of long vowels in closed, word-final syllables in Proto-Semitic, we can go a step further and reconstruct the unbound genitive ending as *-ī-m for a recent ancestor of Proto-Semitic, restoring the regular relationship between the bound and unbound forms of the triptotic case endings.

Interpreting the nisbe suffix's *-y- as a glide that was automatically inserted to break up the hiatus between *-ī and the following case vowels, we have tentatively identified *-ī as the oldest form of the nisbe suffix as well. The homonymy with the reconstructed triptotic genitive ending provides us with a concrete (albeit speculative) scenario elaborating on the long-suspected relationship between these morphemes. In this scenario, *-ī was originally attached to the noun stem to form a derived adjective. With glide insertion or breaking when case endings were added to this nisbe adjective at a later point in time, this yielded the most common nisbe suffix, *-īy-/*-iy-. In other circumstances, the nisbe adjective was reinterpreted as an inflected form of the noun from which it was derived, marking the nomen rectum of a construct chain. If the noun took mimation, this was added to the nisbe suffix, yielding *-ī-m and later becoming the unbound triptotic genitive ending *-im with the same vowel shortening reconstructed on internal grounds for words like *ʔab-ū-m > *ʔab-um 'father (nom.)'. In longer construct chains, only the last word took mimation: thus, long *-ī was preserved in the bound genitive in cases like *bayt-u ʔil-ī śarr-ī 'the/a royal divine house' > *bayt-u ʔil-ī śarr-im 'the/a king's god's house'. Nouns that would later come to form the diptotic declension did not take mimation, which kept the nisbe suffix from being shortened and fully grammaticalizing into the genitive case. Finally, nisbe adjectives were not derived from inflected, dual and plural forms of the noun. 33 Hence, no separate genitive case evolved for any of these categories in the unbound state. In Akkadian, the Proto-Semitic inflection of the genitive was preserved: unbound *-im, bound *-ī.
In West Semitic, the model of unbound nominative *-um : bound nominative *-u and unbound accusative *-am : bound accusative *-a resulted in analogical shortening of the bound genitive ending's vowel: unbound genitive *-im : bound genitive *-i. It may have been at this point that this new bound state genitive ending was analogically extended to the diptotic declension, as reflected in Classical Arabic; this would later form the basis for the triptotic inflection of otherwise diptotic nouns following the definite article al- (Fischer 1972: § 152). Ultimately, the defining feature of the diptotic declension would thus seem to be its lack of mimation (or nunation in Arabic); if the account presented here has some historical accuracy, it was the absence of mimation that led these words to preserve the two-case inflection which has given them their name in the Western grammatical tradition.
Bridging the Macro with the Micro in Conflict Analysis: Structural Simplification as a Heuristic Device

This chapter presents a theoretical argument that looking at how some grand matters of politics are simplified for practical use on the street is necessary to adequately understand how ordinary Serbs and Croats (and to a limited extent, Muslims) were transformed into enemies of their neighbors, workmates, and covillagers in the havoc wrought in Bosnia-Herzegovina between 1992 and 1995. Locals' shifting attitudes toward consanguinal identity, expressions of greeting, and dressing patterns are found to be examples of everyday practices through which perceived differences in civilization, competitive ideas of statehood, and macro-constructions of group identity produce ethnic conflict. A broad conclusion is that attention to localized manifestations of the macro-political will yield more comprehensive understanding in analyses of ethnic conflict.

The chapter thus aims at developing a conceptual tool with which macro- and micro-level factors can be bridged to yield a better understanding of collective violence and its consequences. In a recent, insightful article, the anthropologist Marshall Sahlins created the phrase "elementary form of structural amplification" (Sahlins, 2005, p. 25) with which he intended to capture the process in which a conflict characterized by its local nature is inflated to the supralocal level. Charting the course of Cuban and US governments' and publics' heavy involvement in what was otherwise an oft-repeated and ordinary state of affairs, 2 Sahlins documents how the little Elian Gonzalez became entangled in discussions of communism, freedom, and the Cold War. The fight over Elian's custody, waged initially between Elian's relatives in Miami and Cuba, engaged the larger ideological opposition between American and Cuban governments and publics. Sahlins refers to the process wherein a minor, localized dispute engages a broader set of opposition as "structural amplification," which makes a macrohistory out of a microhistory (Sahlins, 2005). In my discussion below, I chart the course of a reverse process - one in which an ethnonationalist and exclusivist discourse gets appropriated in a village whose inhabitants have otherwise been living in relative peace and harmony. In other words, I look at how ethnonationalist macropolitics gets deflated only to be reconfigured within the power relations in a rural context. I will thus appropriate Sahlins' term with a slight modification: "structural simplification" of ethnonationalist exclusivism in a central Bosnian village, or the reconfiguration of power relations in a rural context out of macropolitical discourse.

In the first section below, I will provide a brief account of the 1992-1995 war in the Balkans which ended with the peace agreement signed in Dayton, Ohio in 1995 during the Clinton presidency. The second section discusses some macrolevel phenomena whose on-the-ground appropriation will be treated in the third section, where I rely on Bringa's (1996) ethnographic analysis of a village in central Bosnia called Dolina (a pseudonym). In the fourth section, I seek to shed some anthropological light on the subject matter with reference to the work of such political anthropologists as Friedman (1998) and Tambiah (1996).
I end the paper with some concluding thoughts, inspired by Lewellen (2003) and Gledhill (2000), on the benefits of anthropological thinking for a better understanding of the processes in which grand concepts such as history, ethnicity, and religion get dissolved and find parochial manifestation. As a result, power relations in a given microcosm may come to be conceptualized rather differently compared to how they were before, with the end result being "collective violence."

A BRIEF HISTORY OF THE 1992-1995 WAR

Relying on Norman Cigar's (1995) benchmark study Genocide in Bosnia, I would like to emphasize in concise form some of the important overarching factors instrumentalized by the Serbian political decision-making mechanism to induce Serbian public opinion into believing in the legitimacy of the Serbian government's dream of achieving a pure "Greater Serbia" at the expense of other ethnic, non-Serb groups populating Bosnia-Herzegovina. These factors pertain to competitive ideas of statehood in the post-Tito era, normative constructions of ethnic superiority and vulnerability, and the supposed threat of escalating radical religious (Islamic) fundamentalism. Two other factors, voiced by some American writers such as Robert Kaplan and Samuel Huntington, include the idea of the ever-presence of historically embedded ethnonational rivalry and hatred, and the notorious "clash of civilizations" thesis as they apply to the region. Whereas Cigar's account of macrolevel factors is well documented and evidenced, Kaplan's journalistic impressions regarding the causes of violence and Huntington's remarks, which I briefly look at below, will exemplify in particular why top-down analyses should be corrected and complemented by views "from below."

The Grand Picture and Its Dominant Colors

Cigar (1995) traces the roots of Serbian nationalist expansionism, whose culmination was the war, to the goals explicitly voiced in a document produced in 1986 by the Serbian Academy of Arts and Sciences, the Serbian Memorandum. Drafted in a Westphalian spirit, this document envisaged the foundation of a pure Serbian state encompassing all Serbs regardless of which former Yugoslav republic they were living in. In Cigar's words, "Coming at a time of impending change and uncertainty, the Memorandum seemed to answer the need for a national strategy blueprint for Serbia" (Cigar, 1995, pp. 23-24). The implementation of the Memorandum could only come about by uprooting other ethnicities of the former Yugoslav republic, which is precisely what Serbian nationalism sought to do with the war, as indicated by the forced displacement of several hundred thousands of Bosnians now scattered across Europe and the United States.

Thus, the post-Tito Serbian nationalism found its most obvious expression in the statements of Serbia's academic elite. This was followed by the stereotypification of would-be victims, in particular Bosnian Muslims, in and through popular culture. An example discussed by Cigar (1995) is the description of Bosnian Muslims as aliens, inferiors, and cold-blooded murderers by a best-selling novelist named Vuk Draskovic, whose writings influenced no less a figure than the commander of the Serbian Guard, who admitted to having beaten Muslims (and Croatians) because of the fury ingrained in him through these writings (1995, p. 25). Next came the work of Serbian scholars specializing in the study of Islam.
This work represented Islam and its adherents as backward, hostile to European civilization, and fundamentalist masterplanners of Serbian destruction. This work further disseminated the idea, frightening to the average Serb, that there were plans to repatriate more than a million Turks to Bosnia, which clearly would contribute to the Islamization of Bosnia-Herzegovina in the post-Tito era (and would indicate a reembracing of the spirit of the Ottoman Empire, which ruled over the Balkans for more than four centuries). Serbian scholars felt that these developments should be countered by any means possible. This academic effort was then bolstered by the efforts of the Serbian Orthodox Church, whose representatives evidenced their claim of Muslim primitiveness by pointing to the fact that walls were built around (Muslim) Albanian houses, which to them demonstrated that Muslims (especially Muslim women) were not liberated, and "hidden behind walls" (Cigar, 1995, pp. 27-32).

The Memorandum, Serbian popular literature, the denigrating work by Serbian scholars of Islam, and the Serbian Orthodox Church's efforts were thus factors in the escalation of Serbian ethnonationalist exclusivism which culminated in a 4-year war between 1992 and 1995 against non-Serb ethnicities. Although the macrophenomenal reality of these factors and their influence are well illustrated in Cigar's work, a more comprehensive understanding requires ethnographic particularism to visualize the processes and mechanisms in and through which such macrophenomenal realities are effectively parochialized - or structurally simplified. This will help in answering the question: "How were people [of the Balkans] who had lived quietly together as neighbors for forty-five years [since the end of second World War] manipulated into killing one another and burning each other's houses down?" (Besteman & Gusterson, 2005, p. 7)

Whereas accounts of the conflict in the Balkans such as that of Cigar would get enriched and not necessarily refuted or corrected by an anthropological approach, other works on the Balkans would probably have to be rewritten in view of the insights provided by an anthropological perspective. Two such works, addressing the two supposed causes of the war - ancient hatreds and civilizational clash - are Robert Kaplan's Balkan Ghosts (1993) and Samuel Huntington's The Clash of Civilizations (1993). Following is a brief overview of the arguments of these two works, and anthropological critiques of them.

For Robert Kaplan (1993), the collective violence in the Balkans was a modern-day reincarnation of ancient ethnonational feelings of hatred that all sides partaking of the violence had been breeding against one another since time immemorial. The "Balkan syndrome," as he termed it, was something like an evil gene predisposing Balkan people toward erupting in violence. Hence, there is not much reason to be startled at the atrocities that the Balkan people meted out against one another. In a devastating critique of Kaplan's travelogical assumptions (my term, by which I intend to convey a sense of the unreliability of such sweeping generalizations which do call for an attention to detail of the anthropological kind), anthropologist Tone Bringa (2005) sets the record straight.
Building on her fieldwork in central Bosnia (which I relate in more detail below) for a total period of 6 years, from 1988 to 1993, Bringa notes in her critique of Kaplan's work that before the war in the ethnically mixed village (Muslim Bosnians and Catholic Croats) where she carried out her fieldwork, adherents of the two separate religious communities helped each other build the village church and the mosque, attended one another's holy days, and extended a hand to one another while building houses. These observations on the ground refute Kaplan's overgeneralized, impressionistic statements about the violent nature of the region's inhabitants. The "ancient hatreds" argument is further contradicted by the work of another anthropologist, Lockwood, who, as early as the 1970s, documented in his ethnography The European Moslems (1975) how Serbs, Croats, and Muslims were peacefully woven into the social fabric through the integrative mechanism of the marketplace.

In a tone somewhat more sophisticated and ostensibly more scholarly than that of Kaplan, Samuel Huntington sees the violence in Bosnia as an instance of a clash of three civilizations, namely the Western, Islamic, and Eastern Orthodox ones. This was a war occurring at what Huntington (1993) named a civilizational faultline. His analysis was contradicted when the Christian United States brokered the peace agreement - thus possibly saving Bosnian Muslims from extinction on a much larger scale than had happened thus far - and also accommodated hundreds of thousands of Bosnian Muslims as refugees during and after the war. 8 As noted by anthropologist Brown (2005), Huntington's theory that countries belonging to the same "civilizational kin group" (a term invented by Huntington, who is not a kinship theorist) would side with one another was discredited by on-the-ground empirical reality. Based on his fieldwork in the region, Brown exposes how the kin links that Huntington thought were so clearly identified were much more complex given the institution of fictive kinship in the Balkans whereby people became related to one another through kumstvo (godfatherhood) ties, which crosscut so-called civilizational attachments.

I suggested at the beginning of this paper that explanations of collective violence based solely on the macro concepts of state, nation, religion, and history tend to remain rather rigid. With reference to various treatments of the Bosnian war, I emphasize that a view from below would either substantially complement such accounts (as in the case of Cigar's macrophenomenal account of the causes of Serbian atrocities) or expose the irrelevance of them to concrete situations experienced by real human beings in real locales (as in the case of Kaplan's and Huntington's accounts of the factors behind the escalation of collective violence). An anthropological approach seems better suited to help understand otherwise unexpected cases of violence: How did ethnonationalist exclusivist discourse get structurally simplified to the village level, as a result of which neighbors, covillagers, perhaps old-time friends and confidantes turned against one another? 9

The following section seeks to describe instances of structural simplification by relying on the ethnographic work of anthropologist Tone Bringa in a central Bosnian village. By structural simplification, I mean that process in which a larger opposition between two overarching identities gets parochialized through the identification of any such overarching identity with its local counterpart.
In this process, the differences invoked at the macrolevel (discursive, or otherwise) between the larger forces of opposition are simplified and selectively appropriated to forge new identities, filling in, or overriding, a preexisting set of local relations with new and mutually oppositional content. The following brief discussion seeks to demonstrate the dissolution and parochialization of exclusivist nationalism in the context of the relations between Catholic Croats and Muslim Bosnians. Although the foregoing discussion has focused on the development and outcomes of Serbian nationalist aggression, in this paper I am less concerned with the origins of the ethnonationalist discourse than with the actual dynamics involved in the process of structural simplification.

One instance Bringa relates is a Serbian villager's remark that his non-Serb covillagers, too, were of Serbian blood (Bringa, 1996, p. 30), which seems to be a telling example of what may be termed "consanguinal expansionism." 10 This demonstrates the structural simplification of Serbian academic exclusivism (which considers Bosnian Muslims nonentities except when they are considered Serbs) to the village context.

Another instance indicative of the simplification of supralocal nationalist rhetoric becomes manifest through villagers' changing greeting practices. While in the public space of communal interaction village inhabitants came to use ethnicity- or religion-neutral phrases of greeting when they encountered one another during various times of the day and on different occasions (on the road, while attending a feast, in neighborly visits, etc.), they reserved exclusive greetings for intraethnic encounters (Croat vs. Croat, or Muslim vs. Muslim). Eventually, the escalating symbolic-discursive and physical violence found a localized manifestation: once Croat forces gained control of the municipality to which Dolina belonged, Croat-specific greetings dominated the public realm (for example, the dealings at administrative offices and in the marketplace), thus extending macrolevel ethnic exclusivism (the idealized "Greater Croatia") to the parochial level by exerting linguistic dominion over a particular portion of everyday life through the imposition of a new greeting structure. As Bringa notes: "Indeed, the Catholic Croats were redefining the whole area (market town and surrounding villages) as 'theirs' and transforming the local Muslims into outsiders, people who did not belong, [which] was one of the many steps in a long series of more or less violent measures to squeeze the Muslims out of their villages and the municipality" (1996, pp. 57-58).

Yet another example of the simplification of high-level nationalist politics, whereby Bosnian Muslims were represented as remnants of Asiatic darkness and backwardness, relates to Dolina's Catholic (Croat) girls' changing perceptions of Muslim girls' dressing patterns. One of Bringa's Croat informants in the village notes that whereas they have left the ways of their parents' choices in clothing behind (and have thus become less and less separable from the modern urban woman), Muslim girls keep more and more to their ways. The expression seems to be a subtle practice of "othering" whereby Muslim girls are pushed into the categories of rural and traditional (Bringa, 1995, pp. 61-62). What is interesting, of course, is the emergence of an otherwise nonexistent practice.
Although each group of girls' parents did not conceptualize one another in terms of their differing clothing practices, the nationalist rhetoric - disseminated through broadcast media, enforced as law in the emerging, ethnically-drafted constitutions (Hayden, 1996) - resulted in the creation of a simplified mirror image of differentiation and othering in the village context via changing perceptions regarding a group's dress. With reference to Bringa's work, we have seen some examples of how macrolevel nationalist discourse manifests itself in a village in the context of consanguinal perceptions, expressions of greeting, and dressing patterns. 11 What follows is a review of some key observations made by a number of political anthropologists regarding localized manifestations of macrolevel discourses which may result in changed perceptions of old friends and existing relations.

ETHNIFICATION, FOCALIZATION, AND TRANSVALUATION: RELEVANCE OF POLITICAL ANTHROPOLOGY TO ANALYSES OF THE WAR IN BOSNIA

The structural simplification process as a result of which old-time fellows, covillagers, and neighbors begin to subtly perceive one another through a reconfigured framework of relations - that is, perceive one another as belonging to different natures, historical roots, and linguistic groups - can be referred to as a case of "ethnification." Although anthropologist Friedman (1998) uses the term ethnification as part of his Marxist approach with which he seeks to explain expressions of declining hegemony, the term has descriptive utility in the context of the war in Bosnia. In particular, Friedman suggests that ethnification, the turn toward an understanding of the nation-state "... in which the nation is dominant, where the nation-state is converted from a contractual to a familistic-ascriptive model" (1998, p. 288), is an expression of the decline of a civilizational perspective based on commercial capitalism. Thus, from the Titoist social contract in which Serbs, Croats, Muslims, Slovenes, and Montenegrins were "Yugoslav" emerged exclusivist, ethnified understandings of separate families of nations (for instance, the Serb nation idealized in the Memorandum as the "Greater Serbia") which admitted of no aliens: hence, Balkanization ensues. In other words, regional disorder was followed by huge migratory flows and demographic exchanges in the Balkans, specifically in Bosnia, as the result of a war guided by a macropolitical ethnified perception of state which dictated intrastate homogeneity (that is, Serbia for Serbs, Croatia for Croats).

The term can have both macrolevel and microlevel application. The Serbian villager's remark that the others too are of Serbian blood may be considered an expression of homogenizing ethnification by which the "others" are precluded from having the right to their own identity. Furthermore, the increasing visibility of Croat-specific greetings in the public space could be seen as another expression of homogenizing ethnification by which the "others" are precluded from the reconstructed public space should they decline to conform to the new linguistic habits.

Two other concepts by another anthropologist, Stanley Tambiah (1996), may serve as useful heuristic devices in the context of the analysis of the war in Bosnia: focalization and transvaluation. "By focalization [Tambiah means], the process of progressive denudation of local incidents and disputes of their particulars of context and their aggregation.
Transvaluation refers to the parallel process of assimilating particulars to a larger, collective, more enduring, and therefore less context-bound, cause or interest" (Tambiah, 1996, p. 192). I introduce these terms not because they are used in Tambiah's (1996) work to describe processes similar to those I have called instances of structural simplification, but because they illustrate the reverse trends (in other words, they capture what Sahlins would call structural amplification). For example, Tambiah employs these two terms while describing "how the original issue of the death of a schoolgirl ballooned into a more general protest against the inequities of the public transport system, and that, again, into an anti-Pathan backlash" (1996, p. 191). As I noted in the beginning, I am interested in the reverse process by which general, macrolevel conflicts and exclusivist discourses are parochialized by the receivers of such discourses. Tambiah's terms may help describe the process whereby, for example, Serbian historiography strips the Battle of Kosovo in 1389 (where Serbs were defeated by the invading Ottomans) out of its context, and instrumentalizes that event by trying to assimilate the memory of it into the larger Serbian macropolitical objective vilifying the Muslims of Bosnia (who converted to Islam following the Ottoman conquest, and therefore, assumed the identity of the invader, as claimed by Serbian historiography). Thus, by placing these two terms against structural simplification, I hope to have made my terminological suggestion clearer.

CONCLUDING THOUGHTS

In this chapter, I first raised the idea that blaming collective political violence on differences in civilization, competitive ideas of statehood, and normative constructions of ethnonational group identity reveals very little, if anything, about how these differences, competitions, and vying constructions manifest themselves in the everyday practices of victims as well as perpetrators of destructive political conduct. In fact, interpreting collective violence as mere consequences of top-down orchestrations limits the political to the realm of governments, political parties, nationalist leaders, etc. Without looking at how the political is embedded in everyday practices, how it manifests itself through real human beings' dealings in such real locales as the village, the street, and the marketplace, one is unable to understand in their multifaceted dimensions the complex processes as well as instruments in and through which objectives declared, legitimized, or forced by the governmental or ruling elite get accepted and/or rejected by their addressees. Thus, when critiquing political scientist David Easton's view that there existed no such thing as political anthropology because "practitioners of this nondiscipline had utterly failed to mark off the political system from other systems of society" (Lewellen, 2003, pp. x-xi), Lewellen notes that the attempt to locate politics in everyday routines is in fact political anthropology's greatest virtue. The discussion in this paper of some instances of structural simplification would show to some extent that events in former Yugoslavia at the level of what Easton would call the "clearly marked off political system" need to be complemented and/or corrected with an eye on micropolitics.
Unless we conceptualize the increased use of Croat greetings in public spaces, the commentary on Muslim girls' (''backward, rural'') dressing patterns, and the attempt by the Serbian villager at enhancing the scope of consanguines as truly political phenomena in view of the then-reigning nationalist rhetoric, we are bound to fall short of understanding the 1992-1995 war in Bosnia in its complexity. The heuristic devices of structural amplification (focalization and transvaluation) are useful in conceiving of how historically specific, localized cases are ballooned or inflated for utilization as part of larger nationalistic discourses. With the idea of structural simplification, however, we can conceptualize how broader macrophenomenal realities are locally parochialized and manifested in everyday practices. My hope is that structural simplification will yield greater understanding of what happened in Bosnia as well as serve as a useful conceptual tool in future research on political conflict. NOTES 1. Including, but not limited to, so-called ancient ethnonational hatreds, religious radicalism, and historically motivated territorial irredentism. In general, the adjectives macropolitical, macrostructural, and macrophenomenal are used in this paper to refer to those supraindividual groups, entities, or factors (''the nation,'' ''the state,'' ''history,'' ''religion,'' etc.) otherwise claimed to have an exclusive causative impact on the emergence and sustenance of political conflict. 2. By which Sahlins (2005) means a group of Cubans escaping Castro, traveling in a boat (or some other craft), fighting sharks across the Straits of Florida as well as the US Coast Guard, and, if successful, landing in Miami. 3. I recognize that the history of the 1991-1995 Balkan conflict, which resulted in the collapse of former Yugoslavia, is a contested one. But this paper should essentially be construed as a theoretical exercise, rather than as an attempt to explain why one set of contested explanations is preferable over another. My broader aim is to apply an inversion of anthropologist Marshall Sahlins' theoretical constructs with a view to developing a heuristic device to link macrolevel factors to microlevel practices. Given the limits of this paper, I cannot do justice to all accounts of the conflict that seek to explain it from various angles. Readers interested in a much fuller discussion of the contested accounts may consult Ramet (2005). 4. Kosovo was an autonomous region under the Serbian republic in the former federal Yugoslav state. 5. In particular, Croatia, Bosnia, and Slovenia. 6. Including Bosnians of different ethnic backgrounds, that is, Bosnian Muslims (the major victims), Bosnian Croats, and even Bosnian Serbs who refused to acquiesce to the cleansing project. 7. For more detailed accounts of the war, see Cushman and Mestrovic (1996), Mestrovic (1997), Cohen (1997), and Burg and Shoup (2000). 8. I do not have the space here to extend this critique of Huntington's work. I offer a longer discussion in Keles (2007). 9. One reviewer who commented on this article suggested that ''it seems to be the macro-level politics and rhetoric (the ethno-nationalist ideal of ethnically pure interaction) that is a simplification of the complex pattern of interaction on the local level,'' rather than local-level interactions being simplified, less complex versions of macrolevel discursive battles.
Ultimately, this boils down to the question of whether macrolevel factors (for instance, nationalist political leadership) met a public already harboring exclusivist sentiments, or whether the public (otherwise relatively peacefully interwoven through the marketplace, intermarriage, and educational institutions) only subsequently grew suspicious of one another as neighbors, coworkers, etc. Following the first track runs the risk of providing fodder for the uncritical thesis that imagines the Balkans as a land of perpetual violence, where past grievances are never settled and latent hate is the order of the day. I am more inclined to the latter track, in view of former U.S. President Clinton's foreword to the volume by Swanee Hunt (2004), former U.S. Ambassador to Austria, where Clinton noted: ''As the war raged in Bosnia, Hunt brought to my attention news not making headlines: that the women of Bosnia had been organizing to try to prevent the war, and they were still doing what they could ... to hold together their culturally diverse communities.'' Consider also what one Bosnian woman, Nurdzihana, said after the war: ''I've never accepted ethnic divisions. The way I was raised, we didn't say someone belongs to this or that ethnic group. The atrocities I witnessed had no ethnicity, no religion. We lived together until the day before'' (Hunt, 2004, p. 95). 10. By which term I want to refer to the effort to expand one's range of blood relatives, hence including them in an imaginary ''one of us'' category. 11. I acknowledge that the illustrations excerpted from Bringa's work tell only part of the story in the run-up to the war. For more detailed examples of pre-war (that is, pre-1990) happenings, one can peruse Bringa's ICTY testimony available at http://www.un.org/icty/transe16/990712it.htm (I thank an anonymous reviewer for bringing the testimony to my attention). There, Bringa discusses at some length how the increasing Croat military presence in the region and the repeated, Croat-controlled media broadcasts instilled a sense of fear which reconfigured the way in which Croat inhabitants came to see their long-time covillagers as ethnic others. What seems to have emerged from complex military objectives and carefully planned broadcasts is a divisive process that produced simple, previously nonexistent, and ethnically defined ''us versus them'' perceptions of a hostile nature.
2017-09-07T06:57:55.657Z
2008-11-13T00:00:00.000
{ "year": 2008, "sha1": "1b7efb229a423912de9b4fc05298466da0d9423a", "oa_license": "CCBY", "oa_url": "https://surface.syr.edu/cgi/viewcontent.cgi?article=1009&context=ant", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "0c3645c77c7bb7c8b030c2c576fd53fafde3ee6a", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
2600053
pes2o/s2orc
v3-fos-license
The many facets of adaptation in fly visual motion processing Neuronal adaptation has been studied extensively in visual motion-sensitive neurons of the fly Calliphora vicina, a model system in which the computational principles of visual motion processing are amenable to analysis at the single-cell level. As evidenced by several recent papers, the original idea that motion adaptation adjusts velocity coding to the current stimulus range by a simple parameter change in the motion-detection scheme had to be dismissed. In contrast, linear encoding of velocity modulations and total information rates might even go down in the course of adaptation. Thus it seems that, rather than improving absolute velocity encoding, motion adaptation might bring forward an efficient extraction of those features in the visual input signal that are most relevant for visually guided course control and obstacle avoidance. Visual motion plays an important role in behavioral control. 1 Blowflies are equipped with a set of visual motion-sensitive tangential cells (TCs). 2 These neurons have large receptive fields and a high selectivity for visual motion patterns occurring during certain translational or rotational movements. 3,4 The visual image flow experienced by a fast-flying animal like a blowfly changes dramatically in its intensity and statistical properties depending on the environment and, in particular, the animal's current flight maneuvers. 5,6 This may pose a problem to the neuronal machinery, because neuronal input-output functions are inevitably constrained by thresholds and saturation limits. As a consequence, the working range in which a neuron can effectively respond to small changes in input intensity with a high signal-to-noise ratio can be much smaller than the range of inputs that may be encountered. 7 It is thus not surprising that the first accounts of adaptation in fly TCs assumed that motion adaptation leads to an improved encoding of changes in velocity by aligning the neuronal velocity-response function with the range of velocities currently present in the input. 8,9 The results of a recent study are hardly compatible with the notion that motion adaptation leads to an improved representation of motion velocity by fly TCs, e.g., by tuning neuronal input-output functions to the current demands. 10 In this study, TCs were stimulated with a drifting grating which changed its velocity in a random fashion (Fig. 1). When analyzing how well the time-varying stimuli can be recovered from the neuronal responses by reverse reconstruction, it turned out that adaptation led to a change for the worse rather than an improvement. A change in reverse-reconstruction performance can have two reasons. The first is a decrease in signal-to-noise ratio in the course of adaptation. Surprisingly, this was not the case, although the neuronal response amplitude was nearly halved with adaptation. The second possibility is that adaptation leads to an increase in nonlinear stimulus processing by the neurons. Nonlinear processing would degrade reverse reconstruction, because the reconstruction is based entirely on linear filtering. Thus it was shown that stimulus encoding by TCs shifts from a fairly linear representation of velocity modulations in the non-adapted state to a more and more nonlinear representation in the adapted state. Which type of nonlinear process becomes more and more relevant in visual motion processing by fly TCs in the course of motion adaptation?
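Reverse reconstruction of the kind used in that study amounts to fitting a linear (Wiener-type) filter that maps the neuronal response back onto the stimulus; any nonlinearity in the encoding then shows up directly as a drop in reconstruction quality. The following minimal sketch in Python/NumPy illustrates the idea. The synthetic stimulus, the tanh nonlinearity standing in for a partially nonlinear response, and the filter length are illustrative assumptions, not details of the cited experiments.

import numpy as np

def reverse_reconstruction(stimulus, response, n_taps=64):
    # Reconstruct a stimulus from a response with an acausal linear filter:
    # the least-squares (Wiener) solution mapping a window of the response
    # onto the concurrent stimulus value.
    half = n_taps // 2
    X = np.array([response[i - half:i + half]
                  for i in range(half, len(response) - half)])
    s = stimulus[half:len(stimulus) - half]
    h, *_ = np.linalg.lstsq(X, s, rcond=None)
    s_hat = X @ h
    # Fraction of stimulus variance captured by the linear reconstruction.
    coding_fraction = 1.0 - np.var(s - s_hat) / np.var(s)
    return s_hat, coding_fraction

# Illustrative use: a noisy, mildly nonlinear "response" to a random velocity.
rng = np.random.default_rng(1)
stim = np.convolve(rng.normal(size=4000), np.ones(20) / 20, mode="same")
resp = np.tanh(2.0 * stim) + 0.1 * rng.normal(size=stim.size)
_, cf = reverse_reconstruction(stim, resp)
print(f"coding fraction: {cf:.2f}")

In this toy setting, making the response more strongly saturating (e.g., tanh(4.0 * stim)) lowers the coding fraction, whereas a purely linear response drives it toward 1, mirroring the reported degradation of linear reconstruction as encoding becomes more nonlinear.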
And what functional benefit results from nonlinear processing - a benefit that may count for more than precise tracking of absolute velocity? First of all, the study gives a hint as to where this adaptation-induced change in coding is located. Two types of TCs were studied, one of which, the VS-neuron, is thought to receive direct input from local motion-detector elements in the periphery. The second type of TC, the V1-neuron, is postsynaptic to the first and delivers motion information to the contralateral brain hemisphere. Adaptation effects were similar in the two cell types, indicating that they are generated either in the presynaptic neuron or even further in the periphery. Second, it was observed that in the adapted state fairly strong neuronal responses were still present when the velocity of the grating pattern changed abruptly, whereas the neuronal response during phases of comparatively slow velocity changes was attenuated more strongly. Hints in a similar direction come from another recent study, which addressed the impact of adaptation on information transmission by a fly TC. 11 In this study the neuron was adapted to different velocity stimuli and the responses were fit by a correlation-type motion detector model to assess which of the model parameters change with adaptation. It was found that a shortening of the time constant of a high-pass filter in the periphery provides the best explanation for the observed adaptation phenomena. Such a change could have the effect of emphasizing abrupt changes in the input at the expense of slowly varying inputs. Intriguingly, the system's overall information transmission was not optimized by adaptation. In contrast, a model in which the peripheral high-pass filter was held in the non-adapted state reached higher information rates than a fully adaptable model. The recent findings described above are in some respects reminiscent of the results obtained in the first report on motion adaptation in fly TCs. 8 There, a drifting grating was presented that had a constant velocity over a sustained period of time, apart from brief steps to a higher or a lower velocity. The neuronal response to these velocity discontinuities was enhanced with adaptation, although the response to the baseline velocity was drastically reduced. Originally, it was suggested that adaptation caused a shift in the velocity-response function. However, two observations render this obvious explanation unlikely and imply more complex adaptation-induced changes. First, shifts in the velocity-response function with adaptation appear not to be present in TCs. This was initially shown for adaptation with constant velocity 12 and recently corroborated when using randomly modulated velocity. 10 Second, an enhanced sensitivity to stimulus discontinuities can also be found when stimulus parameters other than motion velocity are transiently changed, e.g., pattern contrast. 13 What does this mean in a functional context? When the system's overall excitability is reduced with adaptation, but abrupt changes in any of the parameters of the stimulus are still able to elicit strong responses, the system might operate as a ''novelty detector''. The idea of improved novelty detection through adaptation has attracted much attention in the auditory system. 14,15 There it is particularly useful to filter novel stimuli from background noise, e.g., by adaptation that is specific for different frequencies of sound.
However, such input-specific adaptation might be more difficult to implement in motion vision than in auditory processing, where different frequencies can be processed separately from early on in the system. An interesting alternative mechanism to accentuate abrupt changes in an input signal was recently demonstrated by a computational network model of mammalian visual cortex. 16 In this model, presynaptic spike-frequency adaptation was combined with synaptic short-term depression. The postsynaptic neurons decreased their activity during tonic activation of the network, but they were still able to respond strongly whenever the input current given to the presynaptic neurons was changed abruptly. In the fly visual system, the cellular basis of adaptation is largely unknown. However, similar to the mechanisms underlying spike-frequency adaptation in mammalian visual cortex, an activity-dependent conductance that is activated by sustained excitatory stimulation has been demonstrated in fly TCs. [17][18][19] To what extent is the processing of natural visual input affected by an accentuation of stimulus discontinuities in the course of adaptation, or ''novelty detection''? In a recent study, retinal image sequences as seen during flight were reconstructed from the flight trajectory and replayed during recordings from TCs. 20 Repeated presentation of these natural image sequences caused a strong decline in the neuronal response. However, when virtual objects were added to the image sequences, the object-induced responses remained much higher than the responses elicited by pure background motion. This result implies that motion adaptation can enhance the detectability of objects, which elicit a prominent discontinuity in image flow during flight. Studies that address in more detail how the dynamics of motion adaptation interact with the complex spatio-temporal profile of natural visual stimuli may in the future help us understand the functional benefits of adaptation under real-life conditions.
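The adaptive shortening of a peripheral high-pass filter time constant discussed above can be explored with a toy correlation-type (Reichardt) motion detector. The Python/NumPy sketch below is only meant to show the qualitative effect; the filter time constants, the stimulus, and the spatial phase shift are made-up illustrative values, not parameters from the cited model fits.

import numpy as np

def lowpass(x, tau, dt):
    # First-order low-pass filter (exponential smoothing).
    y = np.zeros_like(x)
    a = dt / (tau + dt)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

def highpass(x, tau, dt):
    # First-order high-pass: the input minus its low-pass component.
    return x - lowpass(x, tau, dt)

def reichardt(left, right, tau_hp, tau_lp, dt):
    # Correlation-type detector: each input is high-pass filtered, one arm
    # is delayed by a low-pass filter, and the two mirror-symmetric
    # multiplications are subtracted.
    l = highpass(left, tau_hp, dt)
    r = highpass(right, tau_hp, dt)
    return lowpass(l, tau_lp, dt) * r - lowpass(r, tau_lp, dt) * l

# Illustrative stimulus: slow frequency drift with an abrupt step at t = 1 s.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
freq = 1.0 + 0.5 * t + 4.0 * (t > 1.0)      # instantaneous frequency (Hz)
phase = 2.0 * np.pi * np.cumsum(freq) * dt
left, right = np.sin(phase), np.sin(phase - 0.5)  # spatially shifted inputs

for tau_hp in (0.5, 0.05):                  # non-adapted vs. adapted
    resp = reichardt(left, right, tau_hp, tau_lp=0.03, dt=dt)
    ratio = np.abs(resp[t > 1.0]).mean() / np.abs(resp[t <= 1.0]).mean()
    print(f"tau_hp = {tau_hp:.2f} s: step/baseline response ratio = {ratio:.2f}")

Shortening tau_hp raises the filter's cutoff frequency, so the slowly varying part of the stimulus is attenuated while the abrupt step still drives a strong response, consistent with the emphasis on stimulus discontinuities described above.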
2016-10-11T02:19:10.865Z
2009-01-01T00:00:00.000
{ "year": 2009, "sha1": "d3c12b00f1e3cb58f722f4a6a87263df511e92da", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/cib.2.1.7350?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "cc27495e24b06016303c7179f7ae02e39b53fabb", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
135260332
pes2o/s2orc
v3-fos-license
Interactive comment on "Technical note: Analysis of observation uncertainty for flood assimilation and forecasting" The manuscript presents an analysis of the uncertainty of remote sensing-derived water level observations used for the real-time constraint of flood forecast models. Specifically, the diagnostic approach presented by Desroziers et al. (2005) is applied to assess the uncertainty of spatially distributed water level observations derived from a sequence of seven high-resolution SAR images acquired during the November 2012 flood event in the lower Severn and Avon rivers (UK). The hydraulic model is LISFLOOD-FP and the data assimilation scheme is the Local Ensemble Transform Kalman Filter. Extraction of remote sensing-derived water level values, model set-up, and data assimilation scheme were presented in previous publications (respectively, Mason et al., 2012 and García-Pintado et al., 2015). This manuscript first provides a summary of the diagnostic approach presented by Desroziers et al. (2005) and then a description of the experiments. Introduction In data assimilation (DA), observations are combined with numerical model output, known as the background, to provide an accurate description of the current state, known as the analysis. In DA the contributions from the background and observations are weighted according to their relative uncertainty. In the assimilation, both the instrument error and the representation error contribute to the observation error (Janjić et al., 2017), which may be correlated and state dependent (Waller et al., 2014; Hodyss and Nichols, 2015). In DA, observation error statistics are typically assumed to be uncorrelated. The data density is reduced in order to satisfy this assumption (Lorenc, 1981). Having adequate estimates of these uncertainties is crucial in order to obtain an accurate analysis. In numerical weather prediction (NWP) DA systems, the use of full observation error correlation matrices has led to the inclusion of additional observation information content (Stewart et al., 2008). This results in a more accurate analysis and improvements in objectively measured forecast skill (Weston et al., 2014; Bormann et al., 2016). Furthermore, an understanding of the observation uncertainties can provide insight into which observations are most useful to assimilate. The development of DA systems has largely been driven by its use in NWP, but the methodologies are applicable to any system that can be modelled and observed. There have been recent advances in real-time 2D hydrodynamic modeling and the acquisition and processing of relevant remote sensing observations (earth observations, EOs) (Raclot, 2006; Andreadis et al., 2007; Schumann et al., 2007, 2011; Mason et al., 2010a, 2012, 2014). Consequently, several studies have shown the benefit of applying DA to operational flood forecasting (Durand et al., 2008, 2014; Montanari et al., 2009; Roux and Dartus, 2008; Neal et al., 2009; Matgen et al., 2010; Mason et al., 2010b; Giustarini et al., 2011; García-Pintado et al., 2013, 2015). Grimaldi et al.
(2016) review the potential of EOs for inundation mapping and water level estimation, and their use for calibration, validation, and constraint of real-time hydraulic flood forecasting models. A predominant EO technique to obtain water level observations (WLOs) is Synthetic Aperture Radar (SAR). SAR provides high-resolution observations of radar backscatter which, after processing, serve to delineate the flood extent. Then, the intersection of the flood extent with a high-resolution LiDAR Digital Terrain Model is used to obtain the WLOs. A common characteristic of these EOs is that they need to be subjected to strict quality control (QC) procedures if they are to be unbiased. The QC, for example, may reject observations in vegetated areas or under other conditions, depending on the application. As a result, the observed flood extent is discontinuous in space. Nevertheless, these discontinuous observations may be rather dense, and the errors in the observations may be highly correlated. A direct assimilation of this dense dataset would lead to an analysis biased towards the observations and, for covariance-evolving methods (e.g., ensemble Kalman filters), an over-reduced posterior covariance and unstable long-term forecast/assimilation cycles. Thus, to avoid dealing with the spatial correlation in the assimilation, the current approach is to further thin the data, as is standard in other assimilation applications, typically retaining approximately 1% of the pre-thinned observations. The result is that the dataset to be assimilated is a sparse field of clustered observations. Irrespective of the processing method, the full correlated observation error statistics are unknown. In the field of hydrological forecasting, one scenario that would benefit from an improved understanding of the observation uncertainties is the assimilation of satellite-derived surface soil moisture (SSM) into catchment-scale rainfall-runoff models (Cenci et al., 2016; Mason et al., 2016). Another, which is receiving great attention, is the assimilation of satellite-derived WLOs for either operational flood forecast or hindcast analyses (Mason et al., 2010a; García-Pintado et al., 2013). A more detailed understanding of the observation uncertainties would be highly useful, as it can inform the thinning strategy and suggest which observations may benefit the assimilation most (Fowler et al., 2017). Additionally, understanding the error statistics may permit more observations to be included in the assimilation, which should allow the information from dense observation sets to be fully exploited. There is a clear potential to improve the flood forecast if all the SAR WLOs could be assimilated in an appropriate way. Observation uncertainties cannot be computed directly; instead they must be estimated statistically. Desroziers et al. (2005) provide a diagnostic to estimate observation uncertainties using the statistical average of observation-minus-background and observation-minus-analysis residuals. The diagnostic has been applied in operational NWP settings to estimate observation uncertainties with good results (Weston et al., 2014; Waller et al., 2016a, c; Bormann et al., 2016; Cordoba et al., 2017). In this manuscript we use the diagnostic of Desroziers et al.
(2005), described in Section 2, to estimate the observation error statistics for SAR WLOs that are assimilated using a Local Ensemble Transform Kalman Filter (LETKF) into the LISFLOOD-FP 2D hydrodynamic model. For this study, we use a sequence of real SAR overpasses in a flood event that occurred in November 2012 in SW England. A description of the SAR WLOs and the experimental design is given in Section 3. Results are discussed in Section 4. First, we estimate average WLO error statistics across the entire domain for the duration of the flood event. It will be seen later that these globally estimated error statistics show an anomalous pattern. Thus, we then consider whether these error statistics vary across the domain or for different phases of the flood. From the results we infer that the anomalous pattern is related not to the distribution of observations over the domain, but to observations during the later stages of the flood. Importantly, we show that the diagnostic of Desroziers et al. (2005) can be used to identify anomalous observation datasets that are not suitable for assimilation. 2 The diagnostic of Desroziers et al. (2005) Data assimilation is a technique used to provide an analysis, x_a ∈ R^(N_m), the best estimate of the current state of a dynamical system. The analysis is determined by combining the background x_b ∈ R^(N_m), a model prediction, with observations, y ∈ R^(N_p), weighted by their respective error statistics. Here the dimensions of the observation and model state vectors are denoted by N_p and N_m, respectively. To combine the information from the observations and background it is necessary to project the background into observation space using the observation operator, H : R^(N_m) → R^(N_p), which may be non-linear. The analysis can be used to initialise a forecast, which in turn provides a background for the next assimilation. In Desroziers et al. (2005) the analysis is calculated using x_a = x_b + K(y − H(x_b)), with K = BH^T (HBH^T + R)^(−1), (1) where R ∈ R^(N_p × N_p) and B ∈ R^(N_m × N_m) are the observation and background error covariance matrices, K is the Kalman gain matrix, and H is the linearised observation operator, linearised about the background state. The observation error covariance matrix can be estimated using the observation-minus-background, d^o_b = y − H(x_b), and observation-minus-analysis, d^o_a = y − H(x_a), residuals (Desroziers et al., 2005). Assuming that the observation and background errors are mutually uncorrelated, the statistical expectation of the product of the analysis and background residuals results in E[d^o_a (d^o_b)^T] = R. (2) As the resulting matrix is estimated statistically, it will not be symmetric. Therefore, it must be symmetrised before it can be used in a data assimilation scheme.
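In practice, the matrix form of the diagnostic can be evaluated by averaging residual products over assimilation cycles and symmetrising the result. A minimal Python/NumPy sketch of Eq. (2), using synthetic residuals whose dimensions and statistics are assumptions for demonstration only:

import numpy as np

def desroziers_R(d_ob, d_oa):
    # d_ob, d_oa: arrays of shape (n_samples, n_obs) holding the
    # observation-minus-background and observation-minus-analysis
    # residuals collected over assimilation cycles.
    # Remove the sample means so bias does not contaminate the estimate.
    d_ob = d_ob - d_ob.mean(axis=0)
    d_oa = d_oa - d_oa.mean(axis=0)
    # E[d_oa d_ob^T], estimated by averaging over samples.
    R_hat = d_oa.T @ d_ob / d_ob.shape[0]
    # The statistical estimate is not symmetric; symmetrise it.
    return 0.5 * (R_hat + R_hat.T)

# Illustrative use with synthetic residuals.
rng = np.random.default_rng(0)
d_ob = rng.normal(size=(500, 20))
d_oa = 0.6 * d_ob + 0.1 * rng.normal(size=(500, 20))
R_hat = desroziers_R(d_ob, d_oa)
print("estimated observation error std devs:", np.sqrt(np.diag(R_hat))[:3])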
The form of the diagnostic in Eq. (2) is not suitable for calculating observation error statistics when each assimilation cycle uses different observations. Instead, components of the background and analysis residuals must be paired and binned, with the binning dependent on the type of correlation being estimated. For example, when calculating spatial correlations the bins may depend on the distance between observations, whereas for temporal correlations the bins would depend on the time between observations. For each bin, β, the covariance, cov(β), is then computed individually using cov(β) = (1/N_β) Σ_{k=1}^{N_β} (d^o_{a,i} d^o_{b,j})_k − [(1/N_β) Σ_{k=1}^{N_β} (d^o_{a,i})_k][(1/N_β) Σ_{k=1}^{N_β} (d^o_{b,j})_k], (3) where (d^o_{a,i} d^o_{b,j})_k is the k-th pair of elements of d^o_a and d^o_b in bin β, and N_β is the number of residual pairs in bin β. The second term of equation (3) ensures that the computation of the observation error statistics is not affected by bias (Waller et al., 2016a). The diagnostic in Eqs. (2) and (3) only gives a correct estimate of the observation error uncertainties if the error statistics used in the assimilation are exact. Even if the assumed statistics are not exact, the diagnostic can still provide useful information about the true observation error statistics (Waller et al., 2016b; Ménard, 2016). Further limitations include the use of an ergodic assumption in order to obtain sufficient samples (Todling, 2015) and the assumption that the observation operator is linear (Terasaki and Miyoshi, 2014). One further issue is that the standard diagnostic is derived assuming that the analysis is calculated using minimum-variance linear statistical estimation. If local ensemble DA is used to determine the analysis, the diagnostic does not result in a correct estimate of the observation uncertainties. However, by using a modified version of the diagnostic, some of the observation error statistics may be estimated. It is possible to estimate the error correlations between two observations if the observation operator that determines the model equivalent of observation y_i acts only on states that have been updated using the observation y_j (Waller et al., 2017). Since we use a LETKF assimilation scheme in this study, we must take this into account when estimating observation error statistics for the WLOs. Experimental Design This study makes use of the observation, model, and assimilation system described in García-Pintado et al. (2015). We direct the reader to this reference, and references therein, for a detailed description of the derivation of WLOs and the assimilation design. Here we provide a description of the data used specifically in this study. We estimate observation uncertainties for observations from a real flood event that occurred in the South West United Kingdom on a 30.6 km × 49.8 km (1,524 km^2) area of the lower Severn and Avon rivers in November 2012. The WLOs were extracted from a sequence of seven satellite SAR observations (acquired by the COSMO-SkyMed constellation) using the method described in Mason et al. (2012). The WLOs are available daily for the period 27th November to 4th December 2012 (with the exception of 3rd December). Observations on the first day illustrate the flood levels just before the flood peak in the Severn. On 30th November the river went back in bank; however, a substantial amount of water remained on the floodplain (see Fig. 2 in García-Pintado et al., 2015).
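Returning to the binned form of the diagnostic in Eq. (3), the following Python/NumPy sketch groups residual pairs by separation distance and computes the bias-corrected covariance per bin; the synthetic coordinates, residuals, and bin edges are illustrative assumptions.

import numpy as np

def binned_obs_error_cov(d_oa, d_ob, positions, bin_edges):
    # d_oa, d_ob: residual vectors for one assimilation time;
    # positions: (n_obs, 2) observation coordinates.
    # Pairwise separation distances between observations.
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :],
                          axis=-1)
    i, j = np.triu_indices(len(d_oa))
    d = dist[i, j]
    prod, a, b = d_oa[i] * d_ob[j], d_oa[i], d_ob[j]
    cov = np.full(len(bin_edges) - 1, np.nan)
    for k in range(len(bin_edges) - 1):
        sel = (d >= bin_edges[k]) & (d < bin_edges[k + 1])
        if sel.any():
            # First term of Eq. (3) minus the product of the bin means,
            # which removes any bias in the residuals.
            cov[k] = prod[sel].mean() - a[sel].mean() * b[sel].mean()
    return cov

# Illustrative use: 50 observations scattered over a 30 km square.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 30.0, size=(50, 2))
d_ob = rng.normal(size=50)
d_oa = 0.5 * d_ob + 0.1 * rng.normal(size=50)
print(binned_obs_error_cov(d_oa, d_ob, pos, np.arange(0.0, 20.0, 1.0)))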
When presented with an unlikely observation, such as an observation with a gross measurement error or one which reflects a rare fluctuation in the dynamics of the system (Vanden-Eijnden and Weare, 2013), data assimilation techniques can lose accuracy. Thus, before being assimilated, the observations are subjected to several QC protocols according to the physical characteristics of the terrain and land cover. The data are then thinned to a separation distance at which the observation errors are assumed uncorrelated. Typically the data are thinned to a separation distance of 250 m. However, in this study a denser observation set (although still sparse), with a thinning distance of 125 m, is used, in which some spatial correlation should remain. The measured standard deviation for the WLOs is 59 cm; this is calculated by fitting a plane by linear regression to the WLOs, which we consider adequate in this case as the floodplain in the downstream observed areas is reasonably flat. The locations of the observations are plotted in Fig. 1. The observations are assimilated into a 75 m resolution LISFLOOD-FP flood simulation model (Bates and De Roo, 2000) using a LETKF (Hunt et al., 2007). Due to the formulation of the diagnostic described in Section 2, the localization in the LETKF is set in standard 2D Euclidean space rather than the physically based distance along the river channel described in García-Pintado et al. (2015), which would require a further adaptation of the diagnostic calculation. The localization radius is set using a compactly supported 5th-order piecewise rational function (Gaspari and Cohn, 1999, Eq. 4.10) with length scale 20 km. We apply the diagnostic of Desroziers et al. (2005) to the observation-minus-background and observation-minus-analysis residuals resulting from the flood assimilation. We first estimate average horizontal error covariances across the entire domain for the duration of the flood event. We then consider whether these error statistics vary across the domain or for different phases of the flood. For all cases the observation error correlations are calculated at a 1 km resolution. Due to issues with the diagnostic (discussed at the end of Section 2), we do not consider any observation pairs with a separation distance greater than 19 km. When evaluating the correlations, we assume that they become insignificant when they drop below 0.2 (Liu and Rabier, 2002). For this assimilation system we assume that the ensemble background error covariance matrix gives a reasonable estimate of the true background error statistics. The assumed standard deviation for the WLOs is 59 cm; however, this does not account for the error of representation and, therefore, may be an underestimate of the true error standard deviation. As is typical for most DA systems, the observation errors are assumed uncorrelated. With these assumed error statistics, the theoretical work of Waller et al. (2016b) suggests that the observation error statistics estimated using the diagnostic will have an underestimated standard deviation and an underestimated correlation length scale. Therefore, when considering our results, we would expect the true standard deviations and length scales to be larger than those we estimate. Average observation error statistics We first estimate average horizontal error covariances across the entire domain for the duration of the flood event. We plot in Fig. 2 the estimated correlation, along with the number of samples used, for the WLOs. The estimated statistics give a standard deviation of 54 cm; this is slightly lower than the assumed error standard deviation of 59 cm. Following the theory of Waller et al. (2016b), we expect the estimated standard deviation to be an underestimate of the true observation error standard deviation, and hence the results suggest that the assumed standard deviation is likely set at the correct level. Our results show that the correlations become insignificant (< 0.2) at approximately 8 km, but there is some unexpected behavior before 8 km. The correlations drop smoothly between 0-4 km, then increase again up to 6 km before dropping off.
This behaviour is seen for a variety of different binning widths (not shown). We investigate the cause of this 'shoulder' in the estimated correlations in Sections 4.2 and 4.3. In general we find that the correlation distance is much longer than the thinning distance of 125 m, which was chosen to try to ensure that the observation errors are uncorrelated. Furthermore, the theoretical results of Waller et al. (2016b) suggest that, with this design of assimilation experiment, the correlation length scales will be underestimated. Correlations in different parts of the domain One possible cause of the shoulder in the correlations is the river structure. It is possible that observations on different tributaries of the river are resulting in the increase in correlations. To test this hypothesis we split the domain in two (as shown in Fig. 1): the western domain covering the river Severn, and the eastern domain covering the river Avon. We plot the estimated correlations, along with the number of samples used for the SAR WLOs, for the western part of the domain in Fig. 3 and for the eastern part of the domain in Fig. 4. We note that there are fewer observations in the eastern domain, and therefore the results are subject to greater sampling error. From Figs. 3 and 4 we see that the 'shoulder' in the correlations is still present in both parts of the domain. In the eastern domain it is very pronounced. This suggests that the cause of the increase in correlations between 4-6 km is not observations on different tributaries of the river. Correlations at different times We next consider whether the correlation structure changes over time. We plot in Figs. 5, 6 and 7 the correlations calculated for the first three days, the second two days, and the final two days, respectively. We see that at the beginning of the flood period, the observations have similar standard deviations to those estimated for the entire flood event; however, the correlation length scale is short, approximately 2 km. During the middle of the flood event the observation error standard deviation decreases and the correlation length scale increases slightly. For the final two days the river is back in bank; for this period the standard deviation is largest, as is the correlation length scale, which is approximately 8 km. It is also in this final period that the 'shoulder' appears in the correlations. In the recession stages of the flood, the days indicated in Fig. 7, the flow was back in bank. Thus an increasingly high proportion of the observations were in areas which remained flooded but were disconnected from the main river flow. For this same sequence of SAR overpasses, García-Pintado et al. (2015) showed that the assimilation of the last three overpasses was still able to exploit the background ensemble covariances to pass some of the information from these WLOs to the main flow. However, two effects became evident: (a) the assimilation increments to keep the forecast on track, despite being in the correct direction, were of a smaller magnitude (thus, not so effective) in these last stages, and (b) the corrections to the flow in these last stages were gradually more short-lived. This was a result of the reduced information content in these WLOs regarding the inflow errors upstream, which in the end control the flood and flow evolution. Here the Desroziers et al. (2005) diagnostic has been able to identify a corresponding anomalous structure in the WLO errors at these last stages. The correlation structure shown in Fig.
7 indicates that, apart from the longer correlation errors, which can be expected from the smoother flood dynamics at the end of the flood, an increase in the correlation appears at ∼6 km. The increasing disconnection of the remaining flooded areas from the main river flow is a plausible source of this anomalous correlation structure. Conclusions We have shown that the Desroziers et al. (2005) diagnostic is a useful tool to identify the error covariance in WLOs from satellite SAR. Further, the diagnostic has been able, in the case study, to isolate an unexpected anomaly in the correlation structure. Author contributions. JW, JG-P and DM prepared the data and ran the experiments. JW and JG-P analyzed the results and drafted the manuscript. DM, SD and NN contributed to the analysis, discussion, and manuscript editing. Competing interests. The authors declare that they have no conflict of interest. Nichols was also supported by the UK NERC National Centre for Earth Observation (NCEO). J. García-Pintado, D. C. Mason and S. L. Dance were supported by UK NERC grants NE/1005242/1 (DEMON) and NE/K00896X/1 (SINATRA). The data used in this study may be obtained on request, subject to licensing conditions, by contacting the corresponding author. Figure 2. Estimated SAR WLO error correlations (black line) and number of samples (bars) used for the calculation. Estimated error standard deviation 54 cm.
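For reference, the compactly supported 5th-order piecewise rational localization function of Gaspari and Cohn (1999, Eq. 4.10) used in the LETKF above can be written as follows in Python; here c plays the role of the length scale, with the function vanishing for separations beyond 2c (how the 20 km length scale maps onto c is an assumption, since conventions differ between implementations).

import numpy as np

def gaspari_cohn(r, c):
    # Gaspari-Cohn 5th-order piecewise rational correlation function;
    # compactly supported, decaying to zero at r = 2c.
    z = np.abs(np.asarray(r, dtype=float)) / c
    rho = np.zeros_like(z)
    near = z <= 1.0
    far = (z > 1.0) & (z < 2.0)
    zn = z[near]
    rho[near] = (-0.25 * zn**5 + 0.5 * zn**4 + 0.625 * zn**3
                 - 5.0 / 3.0 * zn**2 + 1.0)
    zf = z[far]
    rho[far] = (zf**5 / 12.0 - 0.5 * zf**4 + 0.625 * zf**3
                + 5.0 / 3.0 * zf**2 - 5.0 * zf + 4.0 - 2.0 / (3.0 * zf))
    return rho

# Example: localization weights at 0-40 km separation with c = 20 km.
print(gaspari_cohn(np.array([0.0, 10.0, 20.0, 30.0, 40.0]), 20.0))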
2019-01-02T18:32:52.172Z
2018-02-01T00:00:00.000
{ "year": 2018, "sha1": "eecfe5d6a28232e7c8dd09621f50be08fd98eda6", "oa_license": "CCBY", "oa_url": "https://www.hydrol-earth-syst-sci.net/22/3983/2018/hess-22-3983-2018.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "eecfe5d6a28232e7c8dd09621f50be08fd98eda6", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
247650162
pes2o/s2orc
v3-fos-license
Electron-Microscopy Investigations of the Warping Effect in Pigmented High Density Polyethylene 'Diketopyrrolopyrroles' (DPP) are industrially important red pigments which resist bleaching. Thionation (replacement of the O-atom by the S-atom; the title compound (DTPP)) brings about an intense near-IR absorption in the solid state. Because of this, DTPP has recently attracted attention as a material for laser printers and optical information-storage systems based on GaAsAl laser diodes. Three crystal modifications I, II, and III are known to exist for DTPP. Among these, only modification III exhibits an intense near-IR absorption (860 nm), whereas the absorption band appears only at 690-710 nm in modifications I and II. The near-IR absorption is due to interplanar pi-pi interactions of the alternating N-atom of one DTPP molecule and the thiocarbonyl C-atom of the neighboring molecule along the stacking axis. The formation or blocking of the pi-pi interaction ('phase change') by means of optical energy can be applied to an optical information-storage system. The optical disk (structure: substrate/hydrazone/DTPP/Al) exhibits a reflectivity change from ca. 30 to 45% on writing with a power of ca. 9 mW at 780 nm. 1. Introduction 'Diketopyrrolopyrroles' (DPP) are already known as important industrial red pigments. Thionation of the unsubstituted DPP ('dithioketopyrrolopyrrole': DTPP, structural formula in Fig. 1) brings about an intense near-IR absorption. The optical properties of DTPP in the solid state have been investigated in detail, especially in connection with applications for laser printers [1-4] and information-storage systems [5] based on the GaAsAl laser diode. These applications rest on the near-IR absorption of DTPP. We have already reported that there are three crystal modifications of DTPP, among which only modification III exhibits an intense near-IR absorption at ca. 860 nm [6-8]. Furthermore, the electronic properties of DTPP in solution as well as in the solid state have been investigated systematically, both from the standpoint of intermolecular interactions [9] and by molecular-orbital calculations [10]. *Correspondence: PD Dr. J. Mizuguchi, Ciba-Geigy AG, Forschungszentrum Marly, Materialforschung, CH-1723 Marly. The coloration of bottle crates is a very interesting market for a pigment producer. Millions of crates with a lifetime of ca. ten years are produced each year all around the world. Apart from normal high-performance properties, like lightfastness, weatherfastness, and heat resistance (up to 270°), the pigment needs a specific property to be used in this high-density polyethylene application: it must not influence the crystallization process of the polymer. Such an influence can cause shrinkage or warping, showing as a deformation of the injected article and deterioration of the mechanical properties. There are several theories explaining why pigments produce distortion in polyolefins; the most widely accepted one involves the nucleating effect that some pigments have on semi-crystalline polymers such as polyethylene. After the injection, the polymer crystallizes very rapidly during cooling. Temperature gradients present in the mould can produce different rates of crystallization and result in strains within the cooled material. The crystallization is mainly induced by seeding, but could also be induced by epitaxial growth on a crystalline surface [1-3]. Thus, crystalline particles like pigments are likely to modify the crystallization and the crystallization rate, producing distortion in the injected article. Inorganic pigments are particularly good in the coloration of polyethylene articles; they do not produce any deformation, because their chemical constitution and especially their surface properties have no influence on the mechanism of crystallization, and they have a good heat resistance. However, an important drawback appears in use: the highly saturated colors, like red and yellow, can only be obtained with cadmium pigments. Ecological requirements recommend avoidance of their use. Their replacement by organic pigments having the required stabilities was always hindered by the warping problem, which is observed in a majority of organic pigments. The important difference between inorganic and organic pigments lies in their surface: inorganics have a polar, hydrophilic surface; high-performance organics are mostly lipophilic and apolar. A possible method to avoid the nucleating effect is to simulate the inorganic surface on the organic pigments by modification of the surface [4]. Typical treatments involve the precipitation on the surface of the organic particles of a thin layer of a metal oxide (e.g. zirconia, silica, or alumina) [5]. Another possible way to efficiently change the polarity of the pigment is to adsorb or to precipitate on the pigment surface some polar polymers like poly(vinyl alcohol), poly(acrylate), or cellulose derivatives [6]. Testing The efficiency of the treatment on the warping behavior is tested by injection of high-density polyethylene (HDPE) plaques. The pigmented plate is compared to a colorless one; typically, a warped plate shows a decrease in its length (ΔL) compensated by an increased width (ΔW). The Table gives the effect of two treatments on the commercial pigment Irgazin DPP® Red BO. The treatment with inorganic material gives the expected results; the pigment is still not warp-free: the encapsulation with inorganics is certainly not homogeneous enough, and could also be destroyed during the dispersion step. Conversely, the treatment with the polymer shows a very good improvement of the warping behavior. These values represent an optimum; for dispersion reasons it is certainly not possible to go further, since new naked faces are formed when small aggregates are broken during the dispersion process. The nucleating effect of the pigment is also regularly assessed by calorimetry and by optical microscopy. In the first test, the crystallinity and the crystallization rate of an extruded HDPE sample containing 3.5% pigment are measured by DSC. In the selection of pigments discussed here, the measured differences are too small to be representative. The second test is a recrystallization at isothermal temperature. The observation of this process between two glass plates under an optical microscope does not give much more information. Normally, a non-warping pigment produces large and regular spherulites. In contrast, a warping pigment results in a very fine crystallization with irregular and numerous spherulites. In this case, untreated as well as treated pigments present surprisingly similar crystallization behavior. All three samples show a very fine crystalline structure and cannot be differentiated.
Scanning Electron Microscopy The poor discrimination observed with these traditional investigations has led us to look at the microscopic level in order to study the area around the pigment particle as well as the pigment-polyethylene interactions. Samples and Preparation The high-density polyethylene used in this study was a Stamylan 9089V ex Hüls. The pigments and the HDPE pellets were first mixed (3.5% w/w) in a Turbula mixer and then extruded (dimension: 15 mm × 1 mm) three times for a better dispersion. The maximum extrusion temperature was 200°. A 2-cm long piece of the extruded bar was then frozen to liquid N2 temperature and fractured. One fractured face of each sample was etched for 4 h with an acid permanganate solution (0.7% KMnO4 solution in a 1:1 mixture of 98% H2SO4 and 83% phosphoric acid) [7]. The other face was kept untouched. Both were observed simultaneously. The sample preparation was completed by sputter-coating the sample surface with a 10-nm Au/Pd film in order to avoid electrical charging in the SEM. Samples were observed with a Philips 525-M scanning electron microscope equipped with an EDX TRACOR TN5400. Results The observation of the effect of the different pigments on the polymer structure requires that the pigment particles themselves can be located with precision. For the inorganic pigments, this is not much of a problem, because the difference in average atomic number between the polymer and the pigment is high enough to build a good contrast between the two. For organic pigments, however, there is very little difference, and hence very little contrast between matrix and particle. By efficient use of the SEM-EDX (scanning electron microscope coupled with an energy-dispersive X-ray spectrometer) combination, we were able to find the pigment particles by localization of the X-ray emission of the Cl atoms contained in the pigments. The results obtained are shown in Figs. 2-4. Fig. 2A depicts the behavior of a CdS pigment particle in the HDPE. The hydrophilic surface being totally incompatible with the polymer, the pigment particle is found sitting in a cavity built around it by the polymer. There is clearly no interaction between the pigment and the polymer. In Fig. 2B, the pigment is an untreated DPP with a hydrophobic surface, and therefore polymer-compatible. The capability of the pigment particle to blend into the polymer is well illustrated here. Only the X-ray Cl emission allowed us to identify the pigment position with precision. In Fig. 2C, the same pigment as in Fig. 2B was used, but its surface has been treated in order to render it hydrophilic. The situation is then comparable with Fig. 2A. The organic pigment that does not cause warping is treated like a foreign particle by the polymer and segregated outside the structure. Fig. 3 illustrates the same effect observed with CrTiO4, an inorganic pigment. Fig. 3A shows the original pigment, causing no warping. It behaves much like the CdS of Fig. 2A or the treated organic pigment of Fig. 2C. However, when its surface is chemically treated to make it hydrophobic, it blends into the polymer, as can be observed in Fig. 3B. In this case, the finished parts are warped. In Fig. 4, the effect on the polymer structure can be clearly seen. Fig. 4B shows a chemically etched sample containing the same pigment as Fig. 2B. Fig. 4A shows the structure of the pigmentless reference sample.
One can see that the crystalline structure of the reference is well resolved, whereas in the pigmented sample, which exhibited strong warping, only very fine lamellae could be distinguished in a few areas (arrows). The structure of the non-warped pigmented sample is not shown here. It is very similar to that of Fig. 4A, only much finer. Finally, we observed that the addition of pigment to HDPE changes its microstructure in all cases. The pigments where the warping is the most dramatic are also the ones where the structural changes are the most noticeable. This ability to strongly disturb the polymer's crystalline structure is due to the surface characteristics of the pigment particles. The hydrophobic, hence polymer-compatible, pigments play an active role in the structure development, whereas the hydrophilic ones have a typical impurity effect, namely, they cause no observable structure modification. Conclusions The true explanation for the pigment warping is still unclear: none of the different methods of analysis used has given the real evidence necessary to prove the mechanism of warping. However, the DSC results and the observation by optical microscopy indicate that this effect cannot be explained by the nucleation theory alone. The results obtained by the SEM and by other testing methods direct us to a new theory in which the pigment compatibility is essential. If a pigment is compatible with the polymer (by analogy: the polymer wets the pigment), the polymer chains will have strong interactions with the pigment surface, and the pigment particles will be linked (tied) together by the polymer chains to form a composite matrix. These interactions restrain the chain movement, meaning that mechanical relaxations are hindered, producing the warping (as with the untreated organic pigment). Conversely, incompatible pigments, such as the inorganic pigments (CdS or CrTiO4), present no interaction with the polymer; the pigment particles and the polymer can behave as two separate phases, and mechanical relaxations after the processing are possible. In all the methods investigated, the measured effect produced by the warping is small (e.g. the degree of crystallinity, the kinetics of crystallization, the change of yield, etc.). This makes it difficult to understand fully the origin of warping, and it appears impossible to progress further in the study of warping with the analytical techniques used until now (SEM, TEM, DSC, etc.). From the point of view of the pigment producer, even if the true explanation of the warping was not definitively established, a good solution to the problem was found, and a commercial product is actually on the market. The treated product is commercialized under the trade name Cromophtal DPP® Red BOC.
2022-03-25T15:24:42.055Z
1994-09-28T00:00:00.000
{ "year": 1994, "sha1": "04465b0ffbf30be1aea4a088aeb304559f0bd1a9", "oa_license": "CCBYNC", "oa_url": "https://www.chimia.ch/chimia/article/download/1994_436/1715", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0727c369527fd02858c428b846aafa4f339d3283", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
86276088
pes2o/s2orc
v3-fos-license
Receptor-mediated biliary transport of immunoglobulin A and asialoglycoprotein: sorting and missorting of ligands revealed by two radiolabeling methods. In the rat, all receptor-bindable immunoglobulin A (IgA), and 1-4% of injected asialoglycoprotein (ASG), are transported from blood to bile intact. The major fraction of the ASG is degraded in hepatic lysosomes. The study described here was designed to elucidate the sorting that occurs in hepatocytes subsequent to receptor binding of ligands not sharing the same fate. We show that conjugation of protein with the Bolton and Hunter reagent can be used as a probe for the lysosomal pathway, since 50% of the reagent is released into bile after lysosomal degradation of internalized protein. Radiolabeling by iodine monochloride was alternatively used to follow the direct pathways that deliver intact IgA and ASG to bile. After intravenous injection of labeled proteins, first intact ASG and IgA, and then radioactive catabolites from degraded protein, were released into bile. No proteolytic intermediates were detected, and the transport of IgA or ASG directly to bile was not affected by the lysosomal protease inhibitor leupeptin. These observations indicate that divergence of the direct biliary transport pathways from the degradation pathway occurs at a stage preceding delivery to lysosomes, possibly at the cell surface. Competition studies showed that all three pathways (including the biliary transport of intact ASG) are receptor mediated, but even at supersaturating doses the uptake and processing of IgA and ASG occur independently. We propose that IgA and ASG receptors are not frequently in juxtaposition on the plasma membrane, but that ASG, after binding to its receptor, is occasionally missorted into the biliary transport pool. Proteins endocytosed by liver parenchymal cells can be processed in one of three ways: they can be transported to bile, returned to blood, or transferred intracellularly to lysosomes, where they are degraded. Clearly, a fundamental question in cell biology is how and where the cell separates proteins with different ultimate destinations: at the cell surface before endocytosis, or in an intracellular compartment. Proteins using different pathways in the liver have not yet been studied together in the same experimental system. We have chosen to investigate the quantitative direct biliary transport of polymeric immunoglobulin A (IgA) (1-3) and the uptake into the degradative pathway of asialoglycoproteins (ASG) (4), and we describe biochemical probes that permit the simultaneous examination of both pathways. (Abbreviations used in this paper: ASFet, asialofetuin; ASG, asialoglycoprotein(s); ASOr, asialo-orosomucoid; BH, Bolton and Hunter (reagent); IgA, immunoglobulin A; SC, secretory component.) The choice of these ligands is ideal, because in the rat, short-term clearance of both proteins is the exclusive responsibility of liver hepatocytes (4-8). Relevant aspects of these pathways previously described are as follows: IgA binds to a high-affinity receptor specific for polymeric immunoglobulins on the hepatocyte sinusoidal membrane (9,10). This receptor is identical to the secretory component (SC) found attached to transported IgA, except that it has an additional sequence, presumably to anchor the binding site in the lipid bilayer (11,12).
The ligand-receptor complex is endocytosed into 100-nm vesicles which then migrate to the bile canaliculus (13,14), whereupon secretory IgA is released intact into bile, leaving the cleaved hydrophobic portion of the receptor in the canalicular membrane. Transport and biliary release of the receptor are not dependent on the presence of ligand (15), and both ligand and receptor are univalent (16,17), implying that bridging of receptors by the ligand is not required for uptake of IgA. It has been postulated that the physiological role of this process is to salvage IgA that has been synthesized locally at the mucosa but has escaped into lymph (18). Mechanistically, the initial steps in the processing of ASG are similar: ASG binds to a high-affinity transmembrane receptor that recognizes terminal galactose residues, and is endocytosed through a coated pit as a ligand-receptor complex, into a small vesicle (19). Ongoing endocytosis of ASG receptors also appears to occur in the absence of ligand (20,21). However, unlike IgA-SC complexes, the ligand dissociates from its receptor shortly after endocytosis, the receptor is recycled back to the plasma membrane, and the ligand is transferred to lysosomes for degradation (22-26). Recently, Geuze et al. (27) have shown that ASG is separated from its receptor in an intracellular compartment subjacent to the plasma membrane. Differential binding and processing of ASG depending on concentration and glycan structure suggest that bridging of receptor binding sites by ligand may play a role in the uptake and intracellular routing of receptor-bound protein (28-32). The physiological relevance of ASG processing is unknown, but it could play a role in clearing senescent plasma proteins which have lost sialic acid from their oligosaccharide chains. In this paper, we make use of two radiolabeling methods to monitor endocytosis and subsequent intracellular processing. Labeling with the Bolton and Hunter (BH) reagent provides a novel and unique approach for the study of the lysosomal degradation pathway simultaneously with the direct biliary transport of IgA, and has facilitated a detailed analysis of the independence of the two processes. In addition, we found in the course of this study that a small proportion of injected ASG appears in bile as intact protein. This has been largely ignored in the past, mainly because morphological techniques do not lend themselves well to the study of minor pathways. We include a detailed biochemical examination of this phenomenon here, because we observed that biliary transport of ASG is also receptor mediated, and this has important implications for the level of precision in the sorting process. MATERIALS AND METHODS Proteins: Rat monoclonal dimeric IgA was isolated from ascites fluid of the plasmacytoma lines IR699 and IR22, kindly provided by H. Bazin of the University of Louvain, Brussels, by ammonium sulfate and octanoic acid precipitation (33). Human IgA1 was isolated from myeloma sera in a similar fashion. The polymeric fractions of all IgA preparations were recovered by subsequent chromatography on Sepharose 6B (Pharmacia Fine Chemicals, Uppsala, Sweden; 2.6 × 90 cm). Human IgG from myeloma serum was purified using ammonium sulfate and octanoic acid and chromatographed on Sephadex G-200 (Pharmacia Fine Chemicals; 2.6 × 90 cm). Human albumin was obtained from the Hoechst Pharmaceutical Co., Kansas City, KS.
Samples of human orosomucoid (α₁-acid glycoprotein), prepared by the method of Hao and Wickerhauser (34), were generous contributions from H. Schachter of the University of Toronto, and from the American Red Cross Blood Services Laboratory, Bethesda. Human fetuin was purchased from the Sigma Chemical Co., St. Louis. ASOr and ASFet were prepared by mild acid hydrolysis (35): solutions of the protein in water (orosomucoid, 4 mg ml⁻¹; fetuin, 75 mg ml⁻¹) were heated to 80°C and then made 0.1 N in H₂SO₄. After 45 min, the hydrolysis was stopped by the addition of two equivalents of solid Tris; the solution was cooled and then dialysed into PBS (0.15 M NaCl, 1 mM potassium phosphate, pH 7.4). Chromatography on Sepharose 6B revealed that aggregation or fragmentation of the proteins as a result of this procedure was <5%. The remaining hydrolysable sialic acid measured by the thiobarbituric assay (36) was <1% for orosomucoid and 1-7% for fetuin. Inhibition experiments described in Fig. 7 were carried out using both ASFet prepared by acid hydrolysis and ASFet prepared with neuraminidase: 400 mg of fetuin in 5 ml of 0.1 M acetate buffer, pH 5.6, was incubated at 37°C for 20 h with 0.5 ml of agarose-linked neuraminidase, purchased from Sigma Chemical Co. Treatment with solid-phase neuraminidase resulted in the release of 54% of the total hydrolysable sialic acid. These two preparations were shown to compete in a reciprocal fashion for hepatic uptake and processing.

Radiolabeling and Preparation of Samples for Injection: Immunoglobulin and albumin preparations were treated with 25 mM iodoacetamide for 30 min in PBS (pH 7.4) at room temperature to remove reducing activity. Proteins were labeled using iodine monochloride (ICl) prepared in our laboratory (37), at a final substitution ratio of less than one atom per molecule of protein. Free radioiodide was removed by exhaustive dialysis against PBS containing 0.075% NaI, and then PBS alone. Radioactivity in preparations of protein labeled using ICl was >98.5% precipitable in 15% trichloroacetic acid; the specific activity was typically 10⁹-10¹⁰ cpm mg⁻¹. BH reagent was prepared according to published methods (38), or purchased from Amersham Corp., Arlington Heights, IL. On the day preceding a transport experiment, 10-20 µl of reagent (2 mCi/ml benzene) was dried in a small test tube, and 0.1-0.5 mg of protein in 0.2 ml of 1 M Tris buffer, pH 8.0, was added. Labeling proceeded at 0°C for 30 min with periodic mixing. Preparations were dialysed overnight against PBS containing 0.1% glycine and then twice against PBS alone. Within 3 h of injection, the protein was passed over a Sephadex G-50 column (0.5 × 12 cm) to remove any trace of free reagent, and an aliquot was saved as a standard. The specific activity was 10⁷-10⁸ cpm mg⁻¹. IgA or ASG labeled using the BH reagent showed <20% radioactive decomposition upon incubation in plasma for 8 h at 38°C, or in bile or serum for 1 d at room temperature or 8 d at 4°C, when assessed by gel filtration. For unusual occasions where standard aliquots of stock labeled material showed >10% decomposition at the time when bile samples were to be analysed chromatographically, the clearance and transport data were excluded from this study. Between 10⁴ and 10⁶ cpm of ¹²⁵I- and ¹³¹I-labeled proteins were mixed in 200 µl of PBS and precounted before injection into each animal. This corresponded to 0.01-100 µg, depending on the method of labeling employed.
Data were corrected by computer for isotopic overlap and radioactive decay.

Clearance and Transport Experiments: Male Wistar rats (250-350 g) were purchased from Charles River Canada, St. Constant, Quebec, and maintained on a synthetic Basal Rat Diet, produced by Bioserv Inc., Frenchtown, NJ. Each rat was anesthetized with sodium pentobarbital and maintained in a supine position throughout the transport experiment. The femoral vein was cannulated with PE-10 Intramedic polyethylene tubing (Clay-Adams, Parsippany, NJ), and the bile duct was cannulated proximal to the pancreatic duct with 20 cm of PE-50 tubing, extending through a mid-line abdominal incision. Animals were allowed to recover from the operation for ~1 h. The mixture of radiolabeled proteins was injected through the femoral vein cannula, and bile samples were collected at regular intervals (usually 10 min) until the experiment was terminated at 4 h. The mean rate of bile production was 13 mg min⁻¹ (range, 7-21 mg min⁻¹). Leupeptin inhibition experiments were performed in rats weighing 200-250 g by injecting 5 mg of the drug (Sigma Chemical Co.) in 300 µl PBS intravenously 1 h before the radioactive sample (39). For calculations of the kinetics of biliary transport, the time after injection was taken from the midpoint of the collection interval of each bile sample. This was corrected for the time required to pass through the bile cannula, but not for the time required for the bile to flow from the hepatocyte canalicular face down the ductules to the cannular opening, which requires an interval on the order of a few minutes. Total radioactivity in blood was calculated assuming a blood volume of 20 ml/300 g body wt. For experiments in which clearance from the circulation was also monitored, PE-10 tubing was brought into the abdomen through a small incision medial to the thigh, and the left common iliac artery was cannulated just below the aortic bifurcation. Just before the sample was injected into the femoral vein, 25 U of heparin (Sigma Chemical Co.) in 300 µl of 0.89% saline were administered through the aortic cannula. During the course of the experiment, up to 10 blood samples of ~75 µl were withdrawn (each sample was collected within 15 s). 50 µl was measured accurately from each aliquot, and combined with 50 µl of heparinized saline. For all proteins except ASG, clearance half-times were calculated as the time required to reach half the level of activity observed in the blood sample taken 1 min after injection. For ASG, first clearance half-times were extrapolated from the first sample taken at 1 min, using the injected activity to calculate initial concentration. For clearance half-time calculation only, an allowance was made for a nonclearable IgA fraction (ICl-IgA, 13 ± 3% [mean ± SD]; BH-IgA, 23 ± 3%), based on the fraction of injected activity in the circulation at the end of the experiment. This fraction was still precipitable in 15% trichloroacetic acid. In inhibition studies, 100 mg of protein (75 mg/ml PBS) was infused through the femoral vein cannula during the 10 min preceding injection of the radioactive sample, which was then followed by continuous infusion of a further 50-75 mg over the next 1.5 h. Vpeak (the peak rate of transport of radiolabel to bile) was calculated on the basis of 10-min collection intervals, with the rate expressed as radioactivity per gram bile.
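The isotope-overlap and decay corrections, and the extrapolated ASG half-times described above, reduce to simple arithmetic. The following is a minimal sketch, not the paper's actual correction software: the spillover fractions are hypothetical placeholders (in practice they would be measured from pure-isotope standards counted in both windows), and clearance between injection and the first blood sample is assumed to be single-exponential.

```python
import numpy as np

# Assumed spillover fractions (hypothetical values, for illustration only).
SPILL_131_INTO_125 = 0.25   # fraction of 131-I counts seen in the 125-I window
SPILL_125_INTO_131 = 0.01   # fraction of 125-I counts seen in the 131-I window

T_HALF_DAYS = {"125I": 59.4, "131I": 8.02}  # physical half-lives

def correct_counts(cpm_125_window, cpm_131_window, days_since_injection):
    """Unmix dual-isotope counts (2x2 linear solve), then decay-correct
    each isotope back to the time of injection: A0 = A * 2**(t / t_half)."""
    M = np.array([[1.0, SPILL_131_INTO_125],
                  [SPILL_125_INTO_131, 1.0]])
    observed = np.array([cpm_125_window, cpm_131_window])
    cpm_125, cpm_131 = np.linalg.solve(M, observed)
    cpm_125 *= 2.0 ** (days_since_injection / T_HALF_DAYS["125I"])
    cpm_131 *= 2.0 ** (days_since_injection / T_HALF_DAYS["131I"])
    return cpm_125, cpm_131

def half_time_from_first_sample(injected_cpm, cpm_at_1min, t_min=1.0):
    """Extrapolated first clearance half-time, assuming single-exponential
    disappearance between injection and the first blood sample."""
    k = np.log(injected_cpm / cpm_at_1min) / t_min   # min^-1
    return np.log(2.0) / k                           # minutes

print(correct_counts(12000.0, 8000.0, days_since_injection=2.0))
print(half_time_from_first_sample(1.0e6, 4.0e5))     # -> ~0.76 min
```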
Standard errors and p values shown for % Vpeak at 150 min in Fig. 7 were calculated on the basis of the reciprocal logarithm of the relative rate. In all cases where the observed difference between inhibition experiments and autologous controls was significant by Student's t-test, the difference was also significant by Wilcoxon's test at least at the p < 0.05 level.

In Vitro Binding Experiments: Human secretory component (SC), prepared by affinity chromatography of colostral whey on IgM-Sepharose (40), was coupled directly to Sepharose 4B activated with BrCN (41). Trace radiolabeled IgA in 200 µl of PBS containing 4 mg of bovine albumin per ml was incubated overnight with 50 µl of SC-Sepharose at room temperature before being washed and counted.

Analysis of Bile and Blood Samples: Radioactivity in bile and plasma was counted and then analysed by gel filtration, typically within 48 h of the original experiment, on Ultrogel AcA-44 (purchased from LKB, Bromma, Sweden; see Fig. 2), or on Sephadex G-200 or Sepharose 6B (1.6 × 90 cm at 12 ml h⁻¹, PBS, 4°C; sample size <1.0 ml, fraction size 2.0 ml). Radiolabeled protein in plasma to be used for reinjection was recovered by making the sample 40% saturated in (NH₄)₂SO₄ and dissolving and dialysing the collected precipitate into PBS. Radiolabeled protein in bile to be used for reinjection or analysis by gel electrophoresis was recovered by passing the sample over an Ultrogel AcA-44 column equilibrated in 0.01% ammonium acetate, pooling the included peak of radioactivity, and freeze-drying. 1-10 ng of radiolabeled asialo-orosomucoid (ASOr) was recovered per pooled bile sample. Slab gel electrophoresis in SDS was carried out by the method of Laemmli (42), and autoradiograms were generated using Kodak X-Omatic enhancing screens. Molecular weight standards were generously provided by R. Allore of the University of Toronto.

RESULTS

Uptake and Processing of IgA and ASG

Asialo-orosomucoid (ASOr) and asialofetuin (ASFet) were radioiodinated either directly on tyrosine residues using iodine monochloride (ICl) (37), or by conjugating primary amino groups (mainly ε-amino groups of lysine residues) with 3-(4-hydroxy-5-[¹²⁵I]iodophenyl)propionate by using its N-hydroxysuccinimide ester, the BH reagent (38). ICl- or BH-ASG was injected intravenously into rats together with ICl-IgA, and the kinetics of clearance from blood and transport to bile were monitored. All preparations of IgA and ASG were cleared rapidly from the circulation (Fig. 1, Table I). 67% of injected ICl-IgA was transported to bile within 150 min of injection. This corresponded (within experimental confidence limits) to the fraction of the IgA preparations able to bind to SC-Sepharose (data not shown). The amount of radioactivity originally associated with ICl-ASG detected in bile after intravenous injection represented only 2-6% of the injected amount, and peaked in bile ~10 min before IgA (Fig. 1, C and E). In contrast, ~50% of the radioactivity originally associated with BH-ASG was transported to bile, and reached a maximum transport rate ~7 min later than IgA (Fig. 1, D and F).

Fractionation of Radioactivity Recovered in Bile

Radiolabel from BH-ASG evidently was being processed differently than radiolabel from ICl-ASG. It was therefore important to investigate whether this reflected differential processing of radioactive catabolites after ASG was degraded, or whether the processing of ASG was itself affected by the method of labeling.
To address this issue, we analyzed bile samples from transport experiments by gel filtration chromatography (Fig. 2). The elution volume of ICl-IgA in bile indicated that it had been transported >98% intact. However, radioactivity recovered from BH-ASG eluted in two positions: a small fraction appeared at an elution volume corresponding to that of the injected material, and the bulk of the radioactivity eluted near the void volume. In fact, the radioactivity still bound to ASG in bile represented about the same percentage of the injected dose regardless of which method had been used for radiolabeling (Fig. 3). This suggested that the protein was being processed identically in both cases, but that subsequent to lysosomal degradation of ASG labeled using the BH reagent, the radioactive tag (or a derivative) was being excreted through the biliary tract.

FIGURE 1 Clearance from blood and transport to bile of radioactivity from labeled proteins. A and B show the disappearance of radioactivity from blood for several representative experiments in which two aliquots of a single protein were alternatively labeled with [¹²⁵I]BH reagent or ¹³¹ICl and injected together. C-F show the results of individual transport experiments, in which IgA and asialoglycoprotein were labeled with alternative isotopes and injected together: C and E show the kinetics of appearance of radioactivity in bile when ICl-IgA was injected with ICl-ASFet (C) or ICl-ASOr (E); in D and F, the respective asialoglycoproteins were labeled with the BH reagent. % injected dose transported to bile/min is plotted against the elapsed time (in min) at the midpoint of the collection interval for each bile sample, which has been corrected for the time required for the sample to pass through the bile cannula.

TABLE I (legend): The time of peak transport rate of radiolabel to bile is calculated for the experiments shown in Fig. 3. For each experiment, this time was taken as the midpoint of the 10-min interval when the transport rate (in % injected radiolabel/g bile) was highest. First half-time of clearance of radiolabeled protein from blood was measured via a cannula of the iliac artery in the number of experiments shown at right. Uncertainty shown is mean ± SE when n > 2, or mean ± range where n = 2.

FIGURE 3 Biliary transport of total radioactivity from labeled proteins. The labeling reagent used was ICl (hatched bars) or the BH reagent (speckled bars). Top: accumulated radioactivity in bile at 150 min after injection (mean ± SE, or mean ± range where n = 2). Samples were analyzed by gel filtration as in Fig. 2 for the fraction of radioactivity attached to intact protein; this is indicated within each bar by solid shading. The remaining area corresponds to radioactivity that eluted after 70% of the column volume, representing low molecular weight catabolites. The scale of percentage transport to bile has been expanded in the 0-5% range to display the amount of protein-associated radioactivity transported to bile from labeled ASG and control proteins. Bottom: radiolabel remaining in blood at the end of the experiments (total radioactivity; the subfraction that is protein-bound is not indicated). IgA, rat dimeric IgA from plasmacytoma line IR699; HSA, human serum albumin.
[Fig. 2 abscissa: elution volume, as a fraction of column volume.]

To verify that the biliary appearance of low molecular weight radioactive material from BH-ASG was reflective of an event subsequent to receptor-mediated endocytosis, and not due to spontaneous dissociation of the label in blood, we then used the BH reagent to label a number of proteins that are known not to be removed from the circulation. In the subsequent transport experiments (Fig. 3), injection of labeled IgG, native fetuin, and human albumin resulted in <6% of the radioactivity appearing in bile; as expected, most of the injected radioactivity was recovered in blood. When analyzed, the activity remaining in blood was found to be at least 95% protein bound (data not shown). We then wanted to show that hepatocellular endocytosis of BH-labeled protein did not by itself result in release of most of the injected radioactivity into bile as low molecular weight material. Thus, in the next experiment we tested the appearance in bile of radiolabel from BH-IgA, since IgA is endocytosed and transported without degradation. Radiolabel recovered from bile from these experiments was found by gel filtration chromatography to be >90% protein bound, as predicted (Fig. 2, lower graph). However, it was necessary to explain why BH-IgA was cleared from blood and transported to bile slightly less well than ICl-IgA (Fig. 3). Previous experience in our laboratory has indicated that the binding of polymeric immunoglobulins to SC (the putative transport receptor binding region) is acutely sensitive to perturbations in immunoglobulin conformation, and we suspected that BH labeling was functionally inactivating a small proportion of the IgA molecules that otherwise would be able to bind to the receptor. To test this possibility, identical samples of IgA were concurrently labeled using [¹²⁵I]BH reagent or ¹³¹ICl. The fraction of these preparations that could bind to human SC-Sepharose was 53% and 73%, respectively. The preparations were then injected together into rats, and radioactivity still in blood at 4 h was recovered and reinjected into a new animal. The reinjected BH-IgA and ICl-IgA were not cleared from the circulation of the second animal, and only ~6% of either isotope was transported to bile. This demonstrated that IgA remaining in the circulation in the initial experiment was unable to bind to receptors in the liver, and thus represented functionally inactive protein. The small percentage difference in the clearance from blood and subsequent biliary transport of BH-IgA and ICl-IgA is therefore an artifact of the labeling technique; once bound by SC on the hepatocyte, either preparation is transported quantitatively to bile.

We next turned our attention to the kinetics of appearance of the small fraction of intact ASG in bile. [¹²⁵I]BH-ASG and ¹³¹ICl-ASG were injected together into the same animal, and timed aliquots of bile were analyzed individually by gel filtration (Fig. 4). This revealed that the protein-bound radioactivity transported was the same proportion of the injected dose for both preparations, demonstrating that the transport of intact ASG to bile was independent of the method of labeling. The total radioactivity from ICl-ASG peaked at the same time as protein-bound radioactivity from either preparation, and established ICl-ASG as a useful direct probe for kinetic studies on the transport of intact protein to bile.
For BH-ASG, early bile samples were relatively enriched in protein-bound radioactivity, but low molecular weight catabolites peaked later, reflecting the protein processed by the degradative pathway. It was therefore clear that BH-labeled protein behaved like ICl-labeled protein up to the point where ligands transported to bile intact are separated from ligands destined for lysosomes.

FIGURE 4 (legend, partial): ...as in Fig. 2 for the proportion of radiolabel associated with intact protein (radioactivity eluting at <70% of the column volume) or present as low molecular weight catabolites (eluting at >70% of the column volume). (•) Total transported radioactivity from BH-ASOr; (○) total transported radioactivity from ICl-ASOr; (×) protein-associated radioactivity from BH-ASOr or ICl-ASOr (superimposable).

To gain more direct evidence that biliary release of low molecular weight radioactivity from BH-ASG reflected differential handling of radioactive catabolites subsequent to lysosomal degradation of the protein, we made use of leupeptin, a thiol protease inhibitor that affects the activity of cathepsins (43), thus reducing lysosomal degradation of ASG (39, 44). As shown in Fig. 5, prior administration of leupeptin resulted in the accumulation of radiolabel from injected ASG in the liver, and a decrease in the release of radioactive catabolites from BH-ASG into bile. Thus, release of these catabolites requires lysosomal degradation of the protein. In contrast, there was only a slight increase in the total biliary transport of intact IgA and ASG (not statistically significant), showing that direct transport of proteins to bile is quantitatively unaffected by the modulation of lysosomal function.

Analysis of Labeled Protein Recovered from Bile

To characterize the material that was transported to bile, we recovered the protein-bound radioactivity by gel filtration on Ultrogel AcA-44 and then analyzed it by SDS PAGE (Fig. 6). Prolonged exposure of the autoradiograms generated no evidence for proteolytic intermediates large enough to be included in the column and fixed in the gel. Calibration of the Ultrogel columns showed that insulin was just barely excluded from the peak area pooled for SDS gel analysis; this established a maximum size for proteolytic fragments of IgA or ASG that could be present in bile. When ASOr transported to bile was passed over a 1.6 × 90 cm Sephadex G-200 column, its elution volume was identical (within 1% of the column volume) to a simultaneously run ASOr standard. IgA and ASOr found in bile therefore showed no evidence of degradation. To demonstrate that the ASG transferred to bile was not a particular subfraction of the preparation predestined for biliary transport, we recovered intact ICl-ASOr transported to bile from one experiment to use it in a subsequent study: if the ASOr recovered from bile was indeed a special subfraction of the original preparation, then it should transport almost completely to bile when reinjected into a second animal. When this experiment was performed, the reinjected ASOr was cleared from the circulation of the second animal, but again only a small fraction (4.8%) of the dose was transported to bile intact.

IgA and ASG Uptake and Processing Are Not Cross-inhibitable

To determine whether IgA and ASG receptors are sufficiently close on the plasma membrane of the hepatocyte for cross-modulation of endocytosis to occur, we performed a series of inhibition experiments.
Labeled protein was injected intravenously and bile was monitored for radioactivity, using the following probes: ICl-IgA for the IgA transport pathway; BH-labelled glycoprotein (¹³¹I) for the lysosomal degradation pathway; and ICl-labeled glycoprotein for the minor (direct biliary) pathway of ASG transport. 10-15 min before injection of labeled material, 100 mg of one of several potential competing proteins was infused per 300 g body weight. This dose was selected to ensure the presence of a large excess of inhibitor, so that even low-affinity and less accessible receptors would be as saturated as possible. 100 mg in 10 ml of plasma is equivalent to ~2 × 10⁻⁴ M for ASFet and ~3 × 10⁻⁵ M for IgA. To maintain a high concentration, infusion of the competitor was continued at ~40 mg h⁻¹ as the experiment progressed. In experiments where arterial blood was monitored for uptake of labeled proteins, neither IgA nor ASG showed a significant increase in clearance half-time when the animals were preinfused with the alternative ligand, when compared with control infusions with bovine albumin. This indicated that these two proteins did not compete at the level of receptor binding. We then investigated whether there was any cross-inhibition after the molecules had bound at the cell surface. Data from bile transport experiments were analyzed in terms of both the cumulative percent of injected dose transferred to bile and the shape of the transport profile (Fig. 7): when a trace dose of radiolabeled IgA or ASG is processed by the liver, the rate of transport to bile peaks rapidly and recovers to a few percent of the peak value within a few hours of injection. A competing protein that partially inhibited processing could cause a decrease in the maximum transport rate (Vpeak) and delay this recovery, without ultimately reducing the size of the fraction transported. The recovery from peak transport rate at a predefined timepoint can therefore be used as a sensitive kinetic indicator of inhibition (a quick check of the competitor concentrations quoted above is sketched below).
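The competitor molarities quoted above follow directly from c = m / (Mr · V). The sketch below verifies them under stated assumptions: the molecular weights are approximate literature values, not taken from this paper, except that ~320,000 for polymeric IgA also appears in the Discussion; ~48,000 for asialofetuin is assumed.

```python
# Molar concentration of an infused competitor: c = m / (Mr * V).
# Molecular weights are assumptions (approximate literature values):
# asialofetuin ~48 kDa, polymeric IgA ~320 kDa.
def molarity(mass_mg, mw_g_per_mol, plasma_volume_ml):
    grams = mass_mg / 1000.0
    liters = plasma_volume_ml / 1000.0
    return grams / (mw_g_per_mol * liters)  # mol / L

print(f"ASFet: {molarity(100, 48_000, 10):.1e} M")   # ~2.1e-04 M
print(f"IgA:   {molarity(100, 320_000, 10):.1e} M")  # ~3.1e-05 M
```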
FIGURE 7 Ligand inhibition experiments. Left: biliary transport of radiolabel from a trace dose of labeled ASFet, compared with transport of ASFet in the presence of excess competing ligand and transport of the nondesialylated molecule (Fet). Radioactivity from ICl-labeled protein reflects direct transport; activity from BH-labeled protein principally reflects the material processed by the degradative pathway. The vertical bars illustrate the use of recovery from peak transport rate (Vpeak) as a kinetic indicator of specific inhibition. Right: cumulative radiolabel in bile, and the relative recovery from Vpeak at 150 min, calculated for inhibition studies conducted on the three hepatocellular transport pathways: (1) the biliary transport of IgA (using ICl-IgA as probe, hatched bars); (2) the lysosomal degradation pathway (using BH-labeled glycoprotein, speckled bars); and (3) the biliary transport of intact ASG (using ICl-labeled glycoprotein, tinted bars). Control experiments are compared with experiments conducted after infusion of 100 mg of unlabeled protein. Bile from experiments on the third pathway (tinted bars) was further analyzed by gel filtration chromatography for protein-bound radioactivity; this fraction is shown within the bars of transported radiolabel by solid shading. ICl-IgA (ligand), ICl-labeled rat dimeric IgA from the plasmacytoma line IR699; IgG and IgA (inhibitors), human monoclonal IgG and polymeric IgA; HSA, human serum albumin; p, probability of equal means when compared with control experiments with the same ligand, using the one-sided Student's t-test (see Materials and Methods). Where p values are not shown, comparisons with autologous control experiments were not significant at the p < 0.05 level by either the one-sided or two-sided test.

Results of this analysis (Fig. 7, right) show that prior infusion of unlabeled ASFet was able to inhibit the processing of ASFet and ASOr, whether these proteins were labeled to reflect lysosomal degradation of ASG (speckled bars) or the transport of ASG intact to bile (tinted bars). This demonstrates that both pathways are mediated by a receptor specific for asialoglycoprotein. Inhibition of ASG processing by excess unlabeled ASFet was accompanied by a 21- to 64-fold increase in the circulating half-time of the labeled protein. Human polymeric IgA was able to inhibit the biliary transport of rat IgA dimer (hatched bars). In marked contrast, there was no quantitative or kinetic evidence for cross-inhibition between IgA and ASG processing, in terms of the degradation of ASG, or the transport of either protein to bile.

DISCUSSION

The Bolton and Hunter Reagent Is a Probe for Lysosomal Processing

All of the most frequently used methods of protein radioiodination, such as iodine monochloride, chloramine-T, and lactoperoxidase, involve oxidation of iodide and direct substitution onto tyrosine residues. Labadie, Chapman, and Aronson (45) have demonstrated that lysosomal degradation of ASG in the hepatocyte leads to deiodination of iodotyrosine and subsequent return of the radioisotope to blood as iodide. The BH reagent contains a phenol group presubstituted with iodine, and conjugates to lysine residues by nucleophilic substitution. It was developed as a more gentle and versatile labeling method, but results in the modification of the protein structure with a larger substituent. We have shown in this paper that in contrast to directly iodinated material, the lysosomal degradation of BH-ASG results in the release of half of the injected radioisotope into bile, mostly as material of <6,000 mol wt. The exact chemical nature of this catabolite is presently under investigation. It is clear that BH labeling does not in itself affect the early events in protein processing, and that regardless of the labeling method employed most ASG is ultimately degraded in lysosomes: (a) BH-labeled IgA and ASG are cleared from blood at the same rate as protein labeled on tyrosine residues; (b) uptake and biliary release of radiolabel from BH-ASG can be inhibited by excess unlabeled ASG; (c) IgA and a small proportion of injected ASG are transported to bile intact, regardless of the labeling method used; and (d) leupeptin, a lysosomal enzyme inhibitor, causes accumulation of the radiolabel from ASG in the liver, and (for BH-ASG) inhibits the release of radioactive catabolites into bile. The ability of the liver to recover iodide from iodotyrosine but not BH-lysine indicates a high degree of specificity for the microsomal enzymes described by Labadie et al. (45). Data that we will publish elsewhere show that hepatic uptake of BH-hemoglobin also results in the release of radioactive catabolites into bile: hence the release of catabolites apparently requires delivery to lysosomes but otherwise is not restricted to mediation by any particular receptor.
Thus, the BH reagent provides an experimental probe for studying uptake and lysosomal processing of proteins by the hepatocyte.

The Major Metabolic Pathways of IgA and ASG Diverge Early

The experiments described in this paper were designed to examine the interdependence of IgA and ASG processing, in terms of both the receptors and the intracellular organelles involved in ligand sorting. Separate receptors have previously been described for the binding of these proteins to isolated hepatocytes, but recent in vitro binding data have been interpreted by Stockert et al. (46) to imply that the receptor for ASG participates in the uptake of IgA for biliary transport. The work described here is the first to demonstrate that when the ASG receptor is saturated there is no quantitative or kinetic effect on the transport of IgA to bile. The role of a separate receptor for IgA transport is therefore now conclusively established.

FIGURE 8 (legend, partial): Models for the partitioning of IgA and ASG. Shown are the transport pathway of IgA from blood to bile, and the major ASG pathway which leads to ligand degradation. (1) Common transport through lysosomes: ASOr is specifically degraded; IgA survives digestion and continues on to the canaliculus.

We know from other work that hepatocytes near the portal triad can participate in metabolism of both IgA and ASG. Individual cells must therefore sort the ligands according to one of three models (Fig. 8). First, both proteins may be delivered to lysosomes through a common route, but IgA may survive digestion and be subsequently transported to bile (47). Second, IgA and ASG may be endocytosed together and then sorted in a prelysosomal compartment: material entering the degradative pathway is delivered shortly after endocytosis to intermediate vesicles of 200-500 nm (48-52); intermediate vesicles may also participate in the IgA pathway (53), and thus IgA-ASG sorting could occur at this stage. Third, IgA and ASG may be endocytosed from the plasma membrane directly into separate compartments. The data presented in this paper eliminate the possibility that both proteins are delivered to lysosomes before sorting (Fig. 8, model 1): essentially all transportable IgA can be recovered in bile, with no detectable proteolytic intermediates. IgA is taken up more slowly but is excreted into bile more rapidly than most of the radiolabel from BH-ASG, presumably reflecting an additional time requirement for delivery to and from lysosomes of ASG and its catabolites. The most direct evidence against lysosomal involvement in the IgA pathway is provided by the failure of leupeptin to decrease the amount of IgA reaching bile. Leupeptin clearly affected lysosomal function in our experiments, since it prevented release of degradation products from BH-ASG and caused the accumulation of ASG in the liver. The ensuing conclusion is that lysosome-canalicular shuttling of intact protein does not occur. Therefore two alternatives remain: that IgA-ASG sorting occurs in a prelysosomal vesicle, or that the two ligands are endocytosed separately from the plasma membrane (Fig. 8, models 2 and 3). Possible precedents for these models include the following: double-label studies using α₂-macroglobulin, insulin, epidermal growth factor, iodothyronine, or low density lipoprotein have shown that certain ligands cluster together in common endocytic pits and can subsequently share intracellular vesicles (54-56).
Intracellular sorting of endocytosed protein precedes return of integral membrane components back to the plasma membrane of the fibroblast (57). Receptor-bound IgG in transit from the intestinal lumen to blood, and other proteins endocytosed nonspecifically for lysosomal degradation, are taken up by intestinal epithelial cells into common vesicles and sorted intracellularly (58). On the other hand, some membrane antigens such as Thy-1 are excluded from coated pits (59). In placental epithelial cells, IgG designated for placental transport is internalized separately from IgG that is pinocytosed nonspecifically and degraded in lysosomes (60). The ASG receptor on liver cells can be induced to internalize without affecting transferrin or insulin receptors on the cell surface (61). Thus, there appear to be biological precedents for ligand sorting both before and shortly following endocytosis. The cross-inhibition studies described in this paper can assist in the evaluation of these two alternatives for the partitioning of IgA and ASG in the liver. If the two ligands are endocytosed together through common coated pits, it is reasonable to expect that the binding of one ligand would interfere with the processing of the other. This is because (a) IgA (320,000 mol wt) is larger than its receptor (SC, 90,000 mol wt) and probably overlaps it sterically, thereby limiting interactions with membrane proteins directly adjacent; (b) binding of ASG to its receptor induces receptor aggregation (62); (c) covalent association of aggregated receptors is a necessary requirement for ligand uptake by other cells (63), implying that ligand-receptor aggregates undergoing endocytosis through a single pit are tightly packed; (d) endocytosis is slower and saturable at lower ligand concentration than the binding of ASG to its receptor (64); and (e) on the basis of the high surface density of SC (9), we estimate that potentially 5% of the sinusoidal surface area is covered with IgA at saturation: this could be sufficient to fill all coated pits participating in the endocytosis of ASG (65). Thus, if IgA and ASG were endocytosed into common vesicles, then saturation of SC by IgA could limit ASG-induced aggregation of its receptors or binding to receptors already clustered nearby; conversely, saturation of ASG receptors and the formation of ligand-receptor aggregates could limit binding of IgA to neighboring SC molecules or decrease the accessibility of SC to endocytic pits. Cross-interference of endocytosis would be measured as an increase in transport time to bile; inhibition of binding would be reflected both in transport time and in the rate of disappearance from blood. Analysis of data from the inhibition studies revealed no evidence for cross-inhibition of either uptake or processing of IgA and ASG. Based on the arguments presented above, this suggests that the two ligands are endocytosed separately at the cell surface. The results are conceivably consistent with endocytosis of both ligands into common vesicles, but this is possible only providing the membrane receptor density or diffusion (not vesicle formation) is rate-limiting, and providing the fraction of IgA and ASG receptors clustered with each other on the membrane during endocytosis (including in coated pits) is small. In addition, the active steps in the intracellular processing of ASG have a lower capacity than the endocytic step (64).
Since the biliary transport of radiolabel from ICl-IgA or BH-ASG is not kinetically cross-inhibitable, the handling of these two ligands after endocytosis must remain independent throughout intracellular processing. In summary, this study demonstrates that proteins entering the bile transport and degradation pathways are recognized by separate receptors and sorted before reaching lysosomes; the data are most easily fitted assuming that the pathways share no intracellular compartments.

The Intact Transport of ASG from Blood to Bile Is Receptor-mediated

We have shown here that up to 4% of injected ASG is transported from blood to bile intact. Other workers (66-68) have described the non-receptor-mediated transfer or leakage of proteins including ASG into bile, with molecular weight-dependent and molecular weight-independent components. Nonspecific pinocytosis by hepatocytes accounts for at least part of this transfer, since injected horseradish peroxidase (for which the hepatocyte has no known receptor) can be detected in intracellular vesicles at about the rate expected from the known rate of fluid uptake (69, 70). For many of these proteins, clearance from the circulation is slow, and the ligand has up to several hours to reach bile via the nonspecific route. However, ASG has a circulating half-time of <1 min, and is transported to bile with a kinetic profile reflecting its rapid clearance. We have presented data in this paper demonstrating that intact ASG reaches bile through a receptor-mediated pathway specific for desialylated glycoprotein. The key observation in reaching this conclusion is that persistence of glycoprotein in the circulation is not accompanied by an increase in the fraction transferred to bile. We achieved these conditions by injecting native fetuin, or by injecting desialylated protein in the presence of excess ASFet. If receptor recognition was not required for biliary transport of ASG, then the increased time that the glycoprotein was available in the circulation should have resulted in a concomitant increase in size of the fraction reaching bile. In fact, just the opposite occurred: the size of the transported fraction decreased significantly, indicating that biliary transport of intact ASG following injection of trace doses does indeed require receptor recognition. A precedent for receptor-mediated biliary transport of minor fractions of endocytosed protein has already been established: injected epidermal growth factor has been recovered in bile covalently attached to its receptor (71). We believe that a receptor-mediated minor pathway may account for the appearance in bile of trace amounts of other proteins primarily endocytosed for degradation (72-77). In discussing the mechanism for biliary transport of intact ASG, it is again necessary to consider the possibility that intact protein in bile represents material that has been translocated through hepatocyte lysosomes but has survived degradation (Fig. 9, model 2). However, because of the absence of proteolytic intermediates of ASG in bile, because intact ASG appears in bile earlier than catabolites from BH-ASG, and because leupeptin does not markedly alter the proportion of injected ASG that pursues the intact transport pathway, we conclude that ASG in bile has bypassed lysosomes. Two alternatives remain that do not invoke a special receptor or cell type for the biliary transport of intact ASG (Fig. 9, models 3 and 4): during the sorting of proteins that usually occurs before or just after endocytosis by hepatocytes, ASG may occasionally be missorted into a different ligand-receptor pool, such as an IgA vesicle destined for the bile canaliculus. Alternatively, vesicles containing properly sorted ASG and normally destined for lysosomes may occasionally be misdirected and fuse with the bile canaliculus. Both of these models imply that the biliary transport of many proteins may have no essential physiological function, but occurs simply due to an inherent degree of error in the ligand sorting process. Occasional missorting may also result in the redirection of a fraction of internalized protein back to the surface from which it was originally endocytosed (78). Considering the complexity of metabolic pathways in the hepatocyte, it is perhaps not surprising that the fidelity of ligand partitioning is not complete.

FIGURE 9 Models for transport of ASG intact to bile. Possible mechanisms for escape of ASG to bile (dotted arrows) are superimposed on a diagram of the principal specific transport pathways: (1) nonspecific pinocytosis or leakage through tight junctions; (2) specific transport through lysosomes, with a small proportion of endocytosed ASG surviving digestion; (3) misdirection of ASG vesicles after IgA-ASG sorting is complete; (4) receptor-dependent missorting of ligands: ASG is occasionally missorted into vesicles destined for the canaliculus; these vesicles may be (but are not necessarily) the same as those responsible for biliary transport of IgA. Models 3 and 4 are drawn assuming that ligand sorting occurs at the plasma membrane; analogous models can be drawn if ligand sorting occurs intracellularly, in which case the missorting of ASG could occur before or after release from its receptor. Of the models shown, only 3 and 4 are consistent with the data presented.
Latitudinal gradients of haemosporidian parasites: prevalence, diversity and drivers of infection in the Thorn-tailed Rayadito (Aphrastura spinicauda)

Latitudinal gradients are well-suited systems for explaining the distribution of haemosporidian parasites and host susceptibility. We studied the prevalence, diversity and drivers of haemosporidian parasites (Leucocytozoon, Plasmodium and Haemoproteus) along a latitudinal gradient (30°-56° S) that encompasses the total distribution (~3,000 km) of the Thorn-tailed Rayadito (Aphrastura spinicauda) in the South American temperate forests of Chile. We analyzed 516 individuals from 18 localities between 2010 and 2017 and observed an overall prevalence of 28.3% for haemosporidian parasites. Leucocytozoon was the most prevalent genus (25.8%). We recorded 19 distinct lineages (13 for Leucocytozoon, five for Plasmodium, and one for Haemoproteus). Differences in haemosporidian prevalence and diversity by genus and type of habitat were observed along the latitudinal gradient. Further, we support the existence of a latitudinally associated distribution of Leucocytozoids in South America, in which prevalence and diversity increase toward higher latitudes. The distribution of Leucocytozoon was associated with sub-antarctic habitat (higher latitude) and explained by cold temperature and high precipitation. On the other hand, we failed to find a latitude-associated pattern for Plasmodium and Haemoproteus; however, low prevalence and high diversity were recorded in areas considered a biodiversity hotspot in Central Chile. Our findings confirm the importance of habitat and climatic variables in explaining the prevalence, diversity and distribution of haemosporidian parasites across a large latitudinal gradient spanning the distribution of the Thorn-tailed Rayadito in the world's southernmost forest ecosystems.

The latitudinal distribution of haemosporidian parasites has been explored at different geographical scales around the world (Merino et al., 2008; Oakgrove et al., 2014; Clark, 2018; Doussang et al., 2019; Fecchio et al., 2019a). However, most studies have focused on the genera Plasmodium and Haemoproteus, while the genus Leucocytozoon has been understudied. In South America, Leucocytozoon has been associated with the Andean region (Merino et al., 2008; Harrigan et al., 2014; Matta et al., 2014; Lotta et al., 2015, 2016) and was recently reported in the Amazonian lowlands (Fecchio et al., 2018). For South America, Merino et al. (2008) proposed the existence of a positive association of Leucocytozoon with latitude (not a classical latitudinal pattern), whereas Plasmodium and Haemoproteus support the classical latitudinal gradient in diversity (Merino et al., 2008; Fecchio et al., 2019a). However, in a global review, Clark (2018) did not find a latitudinal gradient in the diversity of the three most common haemosporidian genera, so the existence of latitude-associated parasite distributions remains under debate. The South American temperate forests are distributed along a narrow but extensive latitudinal gradient (~3,000 km; 30°-55° S), mostly in Chile and with a narrow strip in Argentina. These forests are interesting because of their strong biogeographical isolation from other forest regions: to the north they are bounded by the Atacama Desert, the driest in the world; to the east by the Andes; to the west by the South Pacific Ocean; and to the south by the sub-antarctic zone (Armesto et al., 1996, 1998).
In Chile, this biogeographical isolation means that these forests present contrasting habitats and climatic conditions (i.e., relicts of temperate forest immersed in a semiarid matrix in central-north Chile, forests with Mediterranean climatic influences in central Chile, temperate rainy forests in central-south Chile, and Magellanic sub-antarctic forests in southern Chile) (Armesto et al., 1996; López-Cortés and López, 2004). To our knowledge, haemosporidian parasites have never been explored across such a large latitudinal gradient with contrasting environments covering the total distribution of a single bird species in a naturally isolated biome. Such studies help to elucidate the community composition of parasites and its changes in space and time (Bensch et al., 2007; Van Rooyen et al., 2013b). The aims of this study were to assess the prevalence, diversity and drivers of haemosporidian parasites (Leucocytozoon, Plasmodium and Haemoproteus) throughout the latitudinal distribution of a passerine species along the world's southernmost forests.

Bird sampling and study sites

Between 2010 and 2017, we collected blood samples of the Thorn-tailed Rayadito from 18 localities along a latitudinal gradient in the South American temperate forests of Chile (30°-56° S; Fig. 1). In Bosque Fray Jorge National Park, Manquehue, Chiloé, and Navarino Island, birds were captured from their nest boxes (with a manually triggered metal trap that sealed the entrance hole when adults entered to feed their 12-14-day-old nestlings). In the rest of the localities, we captured birds using mist-nets. All captured birds were ringed with numbered metal rings provided by the Servicio Agrícola y Ganadero de Chile (SAG). Birds were weighed, and tarsus length was measured. A blood sample was obtained from the brachial vein by puncture using sterile needles. The volume of blood extracted never exceeded 1% of the bird's body mass. Blood was collected in microhematocrit tubes and stored on FTA Classic Cards (Whatman®) for subsequent molecular analysis. Birds were immediately released at the place of capture. We grouped the sampling areas into four types of habitat (relict-semiarid, mediterranean, rainy, and sub-antarctic) according to biogeographical regionalization (Morrone, 2014) and environmental and climatic conditions (Table 1; Fig. 1), using the map of terrestrial ecoregions of the world.

(i) Relicts of temperate forest immersed in a semi-arid matrix (hereafter relict-semiarid): located between 30° and 32° S (Fig. 1; Table 1) and composed of forest relicts from the Pleistocene. Here we find the northernmost population (Bosque Fray Jorge National Park, lowest latitude) of the Thorn-tailed Rayadito. These forests are composed mainly of Olivillo (Aextoxicon punctatum) and Petrillo (Myrceugenia correifolia), distributed in patches at the top of the coastal mountain range (Villagrán et al., 2004), where fog from the ocean induces microclimatic conditions that allow the forest to exist in this semi-arid matrix (López-Cortés and López, 2004). This fog-induced microclimate is reflected in temperature and precipitation, with low variation inside the forest, while the climate in the semi-arid matrix is mediterranean-arid, with dry hot summers and cold winters.

(ii) Forests in the mediterranean climate (hereafter mediterranean): located between 33° and 35° S (Fig. 1; Table 1) and composed mainly of xeric forests of Peumo (Cryptocaria alba), Hualle (Nothofagus obliqua), Quillay (Quillaja saponaria) and Litre (Lithrea caustica), which are characteristic of the Mediterranean-climate semi-arid region of central Chile, where precipitation occurs only during winter and temperature varies greatly during and between days (Rundel and Weisser, 1975).

(iii) Temperate rainforests (hereafter rainy): located between 37° and 43° S (Fig. 1; Table 1) and comprising the most important native evergreen temperate forests with the highest biodiversity, the Valdivian and Nordpatagonian forests, with predominant species of Olivillo, Maqui (Aristotelia chilensis), Chilca (Fuchsia magellanica) and Coigüe (Nothofagus betuloides). The climate is rainy temperate, with high precipitation throughout the year (Carmona et al., 2010).

(iv) Sub-antarctic Magellanic forests (hereafter sub-antarctic): located between 53° and 56° S (Fig. 1; Table 1) and representing the southernmost distribution limit (highest latitude) of the Thorn-tailed Rayadito. The vegetation is characterized by deciduous Magellanic forest, composed mainly of Lenga (Nothofagus pumilio), Canelo (Drimys winteri), Ñirre (Nothofagus antarctica) and Coigüe. The climate is oceanic, with low annual thermal fluctuation (Rozzi et al., 2007).

Molecular sexing and screening for haemosporidian parasites

Genomic DNA was extracted using the salting-out procedure (Aljanabi and Martinez, 1997). The sex of birds was determined using a molecular method (Fridolfsson and Ellegren, 1999). Polymerase Chain Reaction (PCR) products were run in 1% agarose gels, stained with Syber Safe®, and visualized using the GBOX F3 system for documentation and analysis of fluorescently stained gels (Syngene, MD, USA). Birds were sexed as females (heterogametic, WZ) or males (homogametic, ZZ). Screening for parasites was performed with parasite genus-specific primers in a nested PCR protocol that amplifies a fragment of 480 bp (excluding PCR primers) of the mitochondrial cytochrome b (cyt b) gene of haemosporidian parasites (Hellgren et al., 2004). Two positive controls for parasites and two negative controls (ddH2O) were included for every 48 samples; no contamination was detected. We screened each sample at least twice to avoid false negatives. Positive PCR products were purified and sequenced using the Macrogen sequencing service (Macrogen Inc., South Korea).

Prevalence and genetic diversity

We used the software Quantitative Parasitology v.3.0 (Rózsa et al., 2000; Reiczigel et al., 2019) to calculate unbiased prevalence and its 95% confidence intervals (CI) with Sterne's exact method (Reiczigel, 2003). Prevalence and CI take into account the sample size, avoiding the problems that normal theory faces with skewed distributions, especially for small sample sizes (Rózsa et al., 2000; Reiczigel et al., 2019). To avoid pseudo-replication, only individuals at first capture were considered in statistical analyses. The effect of latitude on the prevalence and diversity of haemosporidian parasites was assessed indirectly by using the type of habitat as an explanatory variable in statistical analyses (because of the latitudinal distribution of habitat types, and to avoid pseudo-replication). We used a bivariate Pearson's Chi-square test to compare infection prevalence of haemosporidian parasites between genera, sexes and types of habitat.
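As an illustration of these prevalence statistics, here is a minimal sketch: it computes prevalence with an exact two-sided CI obtained by inverting scipy's exact binomial test over a grid (scipy's two-sided p-value uses the minimal-likelihood definition, so this behaves as a Sterne-style interval; it is a stand-in for, not a reimplementation of, the Quantitative Parasitology software). The 146/516 count is back-calculated from the reported 28.3% overall prevalence, and the sex-by-infection table is invented for illustration.

```python
import numpy as np
from scipy.stats import binomtest, chi2_contingency

def exact_prevalence_ci(infected, n, alpha=0.05, grid=1000):
    """Prevalence with an exact two-sided CI: the set of p not rejected
    by the exact binomial test (Sterne-style acceptance region)."""
    ps = np.linspace(1e-6, 1 - 1e-6, grid)
    accepted = [p for p in ps if binomtest(infected, n, p).pvalue >= alpha]
    return infected / n, (min(accepted), max(accepted))

prev, (lo, hi) = exact_prevalence_ci(146, 516)   # 146/516 ~ 28.3%
print(f"prevalence = {prev:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")

# Chi-square comparison of infection prevalence between groups,
# e.g. the two sexes (counts here are made up for illustration):
table = np.array([[40, 110],    # infected / uninfected, females
                  [45, 105]])   # infected / uninfected, males
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```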
Genetic diversity was assessed for each type of habitat, and for the total distribution of the Thorn-tailed Rayadito, using the number of polymorphic sites (S), haplotype number (h), gene diversity (Hd), and nucleotide diversity (π) from mtDNA cyt b for each haemosporidian genus, using the software DnaSP v.5.10.1 (Rozas, 2009) (a computational sketch of the Hd and π indices appears below).

Drivers of haemosporidian infection

Generalized Linear Mixed Models (GLMMs) were performed to study the influence of ecological factors (habitat, temperature and precipitation) and a host factor (sex) on the probability of infection by haemosporidian parasites. GLMMs combine the properties of two statistical frameworks, linear mixed models and generalized linear models, providing a more flexible approach for analyzing non-normal data when random effects are present (Bolker et al., 2009). To avoid statistical problems, the data were explored following the recommendations of Zuur et al. (2010). We used GLMMs fitted by maximum likelihood with a binomial distribution, with pooled haemosporidians (Plasmodium, Haemoproteus and Leucocytozoon) and Leucocytozoon alone analyzed separately as dependent variables. Owing to their low prevalence, Plasmodium and Haemoproteus were not evaluated as separate dependent variables.

Phylogenetic analysis

Sequences were aligned and edited using the software Sequencher™ v.5.4.5 (Gene Codes Corporation, Ann Arbor, Michigan, USA). Polymorphic sites were evaluated using Clustal X2 (Larkin et al., 2007). Lineages were identified using the software DnaSP v.5.10.1 (Rozas, 2009). The lineages obtained were compared with parasite lineages recorded in the MalAvi database (Bensch et al., 2009) and GenBank. Novel lineages were named using a code for the host (A. spinicauda: Asp), the country (Chile: Ch), the genus (e.g., L for Leucocytozoon), and a number. They were then deposited in the MalAvi database and GenBank (accession numbers MN083254-MN083268). The best substitution model for phylogenetic reconstruction (GTR + I + G) was determined using the software jModelTest v.2.1.4 (Posada, 2008), selected under both the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Phylogenetic reconstruction was performed using the software MrBayes v.3.2.6 (Ronquist et al., 2012). We used a sequence of Plasmodium falciparum as an outgroup to root the consensus phylogram; the 19 sequences (478 bp) belonging to the lineages found in this study were aligned with 32 lineages (409 bp) previously recorded for the Andean region (Colombia, Perú, Ecuador, Chile). Two independent Markov Chain Monte Carlo (MCMC) simulations were run for 5 million generations, sampling every 200 generations, to create a consensus tree; convergence of the analysis was corroborated by the standard deviation of split frequencies criterion (<0.01). The phylogeny was visualized using the software FigTree v.1.4.3 (Rambaut, 2009). Additionally, we estimated genetic distances (gd) between the Leucocytozoon lineages found in this study and other lineages recorded in the Andean region using a Kimura two-parameter model of substitution, implemented in MEGA v. X (Kumar et al., 2018).

Diversity of haemosporidian parasites

Genetic characterization identified a total of 19 distinct lineages of 478 bp covering the entire latitudinal distribution of the Thorn-tailed Rayadito: 13 lineages belonging to Leucocytozoon, five to Plasmodium and one to Haemoproteus (Table 3).
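The two diversity indices reported from DnaSP follow standard formulas (Nei 1987): haplotype (gene) diversity Hd = n/(n-1) × (1 − Σ pᵢ²), and nucleotide diversity π = mean pairwise differences per site. A minimal sketch, assuming an aligned set of equal-length sequences; the toy alignment is invented and much shorter than the real 478-bp cyt b fragments:

```python
from itertools import combinations

def haplotype_and_nucleotide_diversity(seqs):
    """Hd = n/(n-1) * (1 - sum p_i^2) over haplotype frequencies p_i;
    pi = mean number of pairwise differences per site."""
    n = len(seqs)
    L = len(seqs[0])
    counts = {}
    for s in seqs:
        counts[s] = counts.get(s, 0) + 1
    hd = (n / (n - 1)) * (1.0 - sum((c / n) ** 2 for c in counts.values()))
    diffs = sum(sum(a != b for a, b in zip(s1, s2))
                for s1, s2 in combinations(seqs, 2))
    pi = diffs / (n * (n - 1) / 2) / L
    return hd, pi

# Toy aligned haplotypes (hypothetical, for illustration only):
seqs = ["ACGTACGT", "ACGTACGA", "ACGTACGT", "ACCTACGA"]
print(haplotype_and_nucleotide_diversity(seqs))  # -> Hd ~ 0.833, pi ~ 0.146
```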
New lineages documented in this study include 12 for Leucocytozoon (AspChL1-AspChL12), two for Plasmodium (AspChP1, AspChP2), and one for Haemoproteus (AspChH1). One Leucocytozoon lineage was previously recorded in the Andes mountains of Perú (Galen and Witt, 2014), and the other Plasmodium lineages were previously recorded in different countries around the world (Bensch et al., 2009). For example, we found the widely distributed GRW04 lineage (the morphospecies Plasmodium relictum). The distribution of lineages across the four types of habitat along the latitudinal gradient showed three Leucocytozoon lineages in the relict-semiarid habitat (lower latitude); three Leucocytozoon, four Plasmodium, and one Haemoproteus lineage in the mediterranean habitat (central latitude); six Leucocytozoon, two Plasmodium, and one Haemoproteus lineage in the rainy habitat (south-central latitude); and five Leucocytozoon lineages in the sub-antarctic habitat (Table 3). The two most common parasite lineages (AspChL1, AspChL4) were associated mainly with the sub-antarctic habitat, but AspChL1 was found in all types of habitat, and both were found at lower and higher latitudes (Table 2).

Drivers of haemosporidian infection

Results of the GLMMs explaining haemosporidian infection are summarized in Table 4. The best predictors for both pooled haemosporidians and Leucocytozoon were HABITAT (sub-antarctic), MAXPREC, and COLDTEMP. However, the results for pooled haemosporidians were influenced by the high prevalence of Leucocytozoon throughout the latitudinal distribution of the Thorn-tailed Rayadito. (Table 4 note: year and locality were introduced as random effects; *** p < 0.0001; * p < 0.05.)

Phylogenetic analysis

The Bayesian phylogenetic analysis of the haemosporidian parasite sequences found in this study showed that they grouped into four main clades (Fig. 2): Leucocytozoon into two main clades, Plasmodium into one, and Haemoproteus into one (Fig. 2). The most frequent Leucocytozoon lineages (AspChL1, AspChL4) were grouped in the same clade and were linked mainly to the sub-antarctic habitat (higher latitude), although AspChL1 was present in all types of habitat (Fig. 2; Table 2); the other Leucocytozoon clade was associated mainly with the rainy habitat and less frequently with the mediterranean habitat (Fig. 2; Table 2). Lineages of Plasmodium and Haemoproteus were frequent in the mediterranean and rainy habitats of Central Chile (Fig. 2; Table 2). Genetic distances between the Leucocytozoon lineages found in this study and lineages previously recorded in the Andean region indicated, first, that most of the lineages from A. spinicauda were closely related to one another: lineage AspChL1 was related (gd = 0.002) to AspChL3, AspChL2 to AspChL12 (gd = 0.035), AspChL4 to AspChL11 (gd = 0.035), and AspChL5 to AspChL6 (gd = 0.002). Second, three lineages (AspChL7, AspChL8, AspChL9) were closely related (gd = 0.009-0.012) to a lineage previously recorded in Colombia (GenBank no. KF699313). Finally, lineage AspChL10 was related (gd = 0.012) to AspChL4 and to a lineage previously recorded in Chile (GenBank no. EF153661), and the only lineage in our study previously documented in Perú (GenBank no. KF767431) was related (gd = 0.002) to a lineage previously recorded in Chile (GenBank no. EF153657) and to a lineage from Colombia (GenBank no. KF717054) (Table 5).
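The genetic distances (gd) above were computed under the Kimura two-parameter model, d = -½ ln(1 - 2P - Q) - ¼ ln(1 - 2Q), where P and Q are the proportions of transition and transversion differences. A minimal sketch of this formula, not of MEGA itself; the toy sequences are invented, chosen so that one transition in ~480 sites reproduces the gd ≈ 0.002 magnitude reported for the closest lineage pairs:

```python
import math

def kimura_2p_distance(seq1, seq2):
    """K2P distance: d = -0.5*ln(1-2P-Q) - 0.25*ln(1-2Q), with P and Q
    the transition and transversion proportions over comparable sites.
    Sequences must be aligned and of equal length."""
    purines = {"A", "G"}
    transitions = transversions = valid = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue                      # skip gaps / ambiguity codes
        valid += 1
        if a == b:
            continue
        if (a in purines) == (b in purines):
            transitions += 1              # A<->G or C<->T
        else:
            transversions += 1
    P, Q = transitions / valid, transversions / valid
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

a = "ACGT" * 120            # 480-bp toy sequence
b = a[:478] + "AT"          # one G->A transition at position 479
print(f"{kimura_2p_distance(a, b):.4f}")   # -> ~0.0021
```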
Discussion

In this study, we evaluated the prevalence, diversity, and the factors that influence the probability of infection by haemosporidian parasites (Leucocytozoon, Plasmodium and Haemoproteus) in the Thorn-tailed Rayadito along a latitudinal gradient encompassing the entire distribution of the species. We observed an overall prevalence of 28.3% for haemosporidian parasites in the Thorn-tailed Rayadito along its latitudinal distribution of ~3000 km (30°-56°S). Leucocytozoon was the most common haemosporidian parasite genus, with a high prevalence (25.8%) accounting for 91% of all infections; we detected a prevalence of only 3.5% for Plasmodium and Haemoproteus. The high prevalence of Leucocytozoon (25.8%) in the Thorn-tailed Rayadito was similar to the 24% and 27% Leucocytozoon prevalence in the blue and great tit, respectively, in Europe (Jenkins and Owens, 2011). However, our results differ from the 15.4% prevalence observed at the community level in Chile (Merino et al., 2008) and the 1.2% in the Andean mountains of Colombia (Lotta et al., 2019). This may be explained by the multiple biotic and abiotic factors involved in the transmission, distribution and diversity of haemosporidian parasites (Valkiūnas, 2005; Sehgal, 2015). For example, in our study most birds were sampled during the reproductive season, which is related to the relapse stage of chronic haemosporidian infection (Valkiūnas, 2005; Asghar et al., 2011). Additionally, and in accordance with other studies (McCurdy et al., 1998; Fecchio et al., 2015), no sex-biased haemosporidian prevalence was observed in the Thorn-tailed Rayadito; this is consistent with the similar reproductive costs and exposure to vectors of both mates in monogamous species (McCurdy et al., 1998; Fecchio et al., 2015). In South America, Leucocytozoon has been recorded previously in resident birds from the highlands of the Andes Mountains in Colombia, Peru, Chile, and Ecuador (Merino et al., 2008; Galen and Witt, 2014; Harrigan et al., 2014; Matta et al., 2014; Lotta et al., 2015, 2016; Martínez et al., 2016; this study), from the lowlands of Chile (Merino et al., 2008; Rodrigues et al., 2019; this study), and recently from the lowlands of Amazonas (Fecchio et al., 2018). In this study, we support the existence of a latitudinally associated distribution of leucocytozoids in South America (Merino et al., 2008; Matta et al., 2014). As we expected, and similar to the observations of Merino et al. (2008) in Chile and Oakgrove et al. (2014) in Alaska, Leucocytozoon prevalence increased toward higher latitudes (Table 2). Additionally, genetic lineage richness increased toward central and higher latitudes (Table 3), similar to that reported for higher latitudes in Alaska (Oakgrove et al., 2014). This contrasts with the global-scale review of Clark (2018), which showed no effect of latitude on the diversity of Leucocytozoon and other haemosporidian parasites. The existence of specific local environmental and host conditions that drive the diversity and distribution of haemosporidian parasites (Ellis et al., 2017; Fecchio et al., 2019b), together with the parasite life cycle and transmission (Bordes et al., 2010; Santiago-Alarcón et al., 2012), can explain these results. The distribution of Leucocytozoon along the latitudinal gradient may be attributable to several factors, including competent vector-parasite-host
interactions (Valkiūnas, 2005) and abiotic environmental factors such as temperature and precipitation. The natural geographical isolation of the South American temperate forests from other forest regions provides contrasting environments (relict-semiarid, mediterranean, rainy, and sub-antarctic; see methods for details). Consequently, Leucocytozoon showed contrasting patterns of prevalence across these types of habitat (Table 3), because forest type and variation in forest structure influence the probability of infection (Renner et al., 2016). Our GLMM results indicated that Leucocytozoon prevalence was higher in the sub-antarctic HABITAT (higher latitude), with cooler temperatures (COLDTEMP) and higher precipitation (MAXPREC) (Table 4). Habitat has been observed to be an important predictor of Leucocytozoon infection (Oakgrove et al., 2014; Lutz et al., 2015; Sehgal, 2015; Lotta et al., 2016; Illera et al., 2017; Padilla et al., 2017). Temperature and precipitation have also been described as essential environmental drivers of Leucocytozoon prevalence (Oakgrove et al., 2014; Harrigan et al., 2014; Illera et al., 2017; Padilla et al., 2017). In this sense, our findings are not unexpected, since Leucocytozoon species are adapted to develop and be transmitted below 15°C (Valkiūnas, 2005). In addition, Leucocytozoon has been described as completing its life cycle at higher latitudes and at higher elevations in mountain regions (Haas et al., 2012; Van Rooyen et al., 2013a; Harrigan et al., 2014; Matta et al., 2014; Illera et al., 2017). Mountain regions provide favorable habitat for blackflies, the vectors of Leucocytozoon (Haas et al., 2012; Lotta et al., 2016). Vectors are habitat dependent (Santiago-Alarcón et al., 2012), and blackflies have been recorded in all environments across different altitudes and latitudes (Coscarón and Arias, 2007). In the Andes Mountains of Colombia, it has been proposed that transmission occurs at low temperatures (0-14°C) (Matta et al., 2014; Lotta et al., 2015, 2016), temperature conditions similar to those at the higher latitudes of Chile. Localities at higher latitudes, such as Navarino Island (sub-antarctic HABITAT), comprise several mountain streams that provide suitable conditions for parasite development and transmission (Merino et al., 2008), which is reflected in our prevalence records for the Thorn-tailed Rayadito. In fact, the high overall prevalence in our study was driven by the Navarino Island locality, at 55°40′S the highest latitude at which haemosporidian parasites have been recorded in the world, given the absence of positive samples beyond this latitude. Hence, prevalence and diversity may be driven by the presence of competent vectors, which needs to be taken into account in future studies. High genetic lineage richness was recorded for Leucocytozoon: we found 13 Leucocytozoon lineages, of which only one had been recorded previously, in the Peruvian Andes (Galen and Witt, 2014). Surprisingly, these lineages were different from lineages previously recorded in Chile (Merino et al., 2008; Martínez et al., 2016; Rodrigues et al., 2019), including the three lineages previously found in the Thorn-tailed Rayadito (Merino et al., 2008). These findings might indicate that the genetic diversity of Leucocytozoon in this species in Chile is higher than previously appreciated, even though we included localities covering the entire latitudinal distribution.
Phylogenetic relationships based on sequences previously reported for South America (Andean region) showed two main clades. Lineages recorded in this study were present in both clades, most of them closely related to the other lineages found in the Thorn-tailed Rayadito (Fig. 2; Table 5). However, some of the lineages were closely linked to lineages recorded in other passerine species from the Andean regions of Colombia, Perú and Chile (Fig. 2; Table 5). Our observations suggest that some lineages tend to be generalists, because they are distributed across a wide range of hosts and locations along the Andean regions (Merino et al., 2008; Galen and Witt, 2014; Lotta et al., 2015, 2016). Nonetheless, some lineages might be exclusive to the family Furnariidae (Fig. 2), as was observed for some lineages in the family Turdidae (Lotta et al., 2016). However, we cannot assert the existence of host specificity, since only limited evidence has been described for leucocytozoids below the host-order level (Valkiūnas, 2005; Forrester and Greiner, 2008), and this needs to be explored in bird communities using new approaches beyond the classical molecular methods (Lotta et al., 2019). On the other hand, lineages of Leucocytozoon were mainly associated with rainy and sub-antarctic habitats (Table 3, Fig. 2), as observed previously in Chile, where most lineages were present in localities with similar environmental characteristics (Merino et al., 2008). This might be explained by the fact that the geographic distributions of haemosporidian parasites are generally determined by avian host populations and their abundance (Ellis et al., 2015, 2017). Additionally, in our study two lineages (AspChL1 and AspChL4) underlay the overall prevalence of haemosporidian parasites in the Thorn-tailed Rayadito; these were most frequent in the Navarino Island locality (55°S) but present, at lower frequency, in almost all types of habitat (Fig. 2; Table 2). This is consistent with other studies in which haemosporidian lineages with wider distributions had higher local prevalence (Szöllösi et al., 2011; Swanson et al., 2014). This pattern suggests host-switching (Ellis et al., 2015) along the latitudinal distribution of the Thorn-tailed Rayadito, probably mediated by migratory birds (Durrant et al., 2006), as suggested by Merino et al. (2008), who observed shared lineages between the White-crested Elaenia (Elaenia albiceps), a long-distance migrant, and other native birds from Chile.

Fig. 2. Bayesian phylogenetic reconstruction of 409 bp haemosporidian cytochrome b sequences from positive Thorn-tailed Rayadito samples and lineages found in passerine species of the Andean region. Plasmodium falciparum was used as outgroup. Lineage names of sequences from GenBank accession numbers are given, followed by the passerine bird species and the country in which they were recorded. Lineages found in this study are highlighted in bold, with symbols representing the type of habitat in which each lineage was found. Posterior support values greater than 0.5 are shown for each node.

Table 5. Genetic distances between cytochrome b lineages of Leucocytozoon shown in Fig. 2. Calculations were made using a Kimura two-parameter model of substitution. Lineages of Leucocytozoon obtained in this study are indicated in bold. The lineage of Plasmodium falciparum used as outgroup is indicated in italics.
For Plasmodium and Haemoproteus parasites, we did not find a latitude-associated pattern: infections were absent toward the lower (30°-32°S) and higher (53°-56°S) latitudes, while low prevalence and high diversity were recorded at central latitudes (mediterranean and rainy habitats; 33°-44°S). This association was observed previously in Rufous-collared Sparrows in central Chile (Doussang et al., 2019). It contrasts with the observations of Fecchio et al. (2019a), who found the diversity of Plasmodium and Parahaemoproteus parasites increasing from Patagonia to Amazonia, following the classical latitudinal diversity gradient toward the Equator. Our findings may reflect the fact that central Chile is considered a biodiversity hotspot (Myers et al., 2000). Consequently, higher haemosporidian lineage richness was associated with central Chile, where the higher number of host species increases haemosporidian parasite diversity through parasite lineage sharing and host shifting (Galen and Witt, 2014; Ricklefs et al., 2014; Clark, 2018). Furthermore, it has been observed that Plasmodium and Haemoproteus exhibit diversity similar to that of their avian hosts (Clark et al., 2014), and that prevalence is positively related to host abundance (Ellis et al., 2017). However, we did not find this pattern of prevalence in these high-biodiversity areas of Chile. This might be related to a dilution effect driven by high host diversity (Keesing et al., 2006) and to the association of some lineages with particular host species in areas where all the haemosporidian parasite genera co-occur (Clark et al., 2014; Pulgarín-R et al., 2018). Further, previous records of Haemoproteus in Chile suggested an association of this parasite with the passerine family Emberizidae (Merino et al., 2008), which may be related to the low prevalence of this genus in the Thorn-tailed Rayadito. The lineages of Plasmodium and Haemoproteus recorded in this study appear more generalist, because their distribution was recorded previously across different hosts and geographical areas (Clark et al., 2014; Bensch et al., 2009). Additionally, we found the widely distributed lineage of Plasmodium relictum (GRW04), the parasite that contributed to the decline of the avifauna of Hawaii. This lineage was recorded in central Chile (Manquehue hill locality) and was previously reported in House Sparrows in the north of Chile (Martínez et al., 2016), and recently in bird communities in central and northern Chile by Doussang et al. (in preparation). Finally, differences in prevalence and diversity among haemosporidian genera and types of habitat in the Thorn-tailed Rayadito may be explained by their evolutionary history (Ricklefs et al., 2004, 2014; Valkiūnas, 2005; Lutz et al., 2015). Because the prevalence and diversity of haemosporidian parasites may be driven by the presence of competent vectors, more studies are needed to reveal the potential roles of local species of the families Culicidae, Ceratopogonidae, and Hippoboscidae as vectors. Also, more studies focused on temporal prevalence and diversity patterns in bird communities are necessary to fully understand the distribution and possible host specificity of Leucocytozoon and other haemosporidians in South America.

Declaration of competing interest

The authors declare that there is no conflict of interest.
2019-12-05T09:25:08.601Z
2019-12-03T00:00:00.000
{ "year": 2019, "sha1": "d30d0cb8e46e744772ffc57d32f629a5864030ec", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijppaw.2019.11.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "505f7816816c166a4a76c6db5714c519ab4bb254", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
265210518
pes2o/s2orc
v3-fos-license
Periodontal ligament and alveolar bone remodeling during long orthodontic tooth movement analyzed by a novel user-independent 3D-methodology

The structural process of bone and periodontal ligament (PDL) remodeling during long-term orthodontic tooth movement (OTM) has not been satisfactorily described yet. Although the mechanism of bone changes in the directly affected alveolar bone has been deeply investigated, detailed knowledge about the specific mechanisms of PDL remodeling and its interaction with the alveolar bone during OTM is missing. This work aims to provide an accurate and user-independent analysis of alveolar bone and PDL remodeling following a prolonged OTM treatment in mice. Orthodontic forces were applied using a Ni–Ti coil-spring in a split-mouth mouse model. After 5 weeks, both sides of the maxillae were scanned by high-resolution micro-CT. Following a precise estimation of the tooth movement, an extensive 3D analysis of the alveolar bone adjacent to the first molar was performed to estimate morphological and compositional parameters. Additionally, changes of the PDL were characterized using a novel 3D model approach. Bone loss and thinning, higher connectivity, and lower bone mineral density were found in both studied regions. Also, a non-uniformly widened PDL with increased thickness was observed. The extended and novel methodology in this study provides comprehensive insight into the alveolar bone and PDL remodeling process after long-duration OTM. Although the PDL plays a unique and dominant role in the regulation of bone remodeling during OTM [5][6][7][8], there are no studies comparatively assessing the crosstalk between PDL and bone in long-term periodontal tissue remodeling. A detailed characterization of PDL regeneration and remodeling in its full complexity in the later phases is required to understand the complex biological mechanism of OTM. Based on our previous study presenting an approach to follow the complexity and dynamics of OTM over a long time with non-invasive in vivo monitoring 2, the aim of this study was to describe the changes of alveolar bone and PDL structures in a long-duration OTM mouse model with an accurate and user-independent methodology.
Micro-computed tomography (micro-CT) offers a non-destructive method of detailed anatomical assessment and is commonly used for the assessment of bone remodeling under OTM treatment in mice or rats 9,10. So far, the methodologies used for the evaluation of bone and PDL microstructure in the literature vary in several aspects, which worsens the comparability between studies 11. Still, the method of data analysis is crucial to the validity of a study. Various sizes and shapes of the volumes of interest (VOI) for the morphometric analysis result in non-comparable, often user-dependent data. Typically, in a preclinical setup, bone remodeling is investigated in the alveolar socket of the OTM-treated first molar (M1). Because all bone areas and the PDL in the orthodontically treated periodontium are connected and interact, tissue in the surrounding region of the treated tooth may also be affected. Still, the literature provides insufficient knowledge about the cellular and morphological response in such surrounding areas. Additionally, differences in bone remodeling depending on the stress distribution in the orthodontically treated periodontium have been shown previously [12][13][14][15]. A prevalent compression force is known to lead to bone resorption, while bone growth occurs predominantly on the tension side 4,12,14. However, the exact definition of the location and processes of bone reformation on the tension or compression side, respectively, remains insufficient 16,17. We therefore present an innovative insight into the varying extent of bone remodeling in diverse regions: the alveolar socket of the treated M1, where a combination of tension and compression regions is expected; the alveolar bone between the first and second molar, where a tension region presumably dominates; and the periodontal ligament of M1, where both forces play their role.

During all OTM phases, bone and PDL undergo structural remodeling, characterized by changes in their porosity, mineralization, size, form, and pore distribution. These properties are well described through several parameters estimated via micro-CT. An increase in porosity can be defined by a decreasing bone volume to total volume ratio (BV/TV). In a process of bone loss, or in an initial phase of bone growth, the trabecular bone typically reforms into thinner bone structures with larger spaces between the trabeculae, well described by trabecular thickness (Tr.Th) and separation (Tr.Sep), respectively 13,18. A higher density of pores often leads to extra connections between trabeculae, increasing the intratrabecular connectivity 19. Still, the variability of outcomes found in the literature indicates the importance of the study parameters, such as the duration of OTM, the applied mechanical force, and the position and size of the evaluated VOI 16,18,20,21. Reduced bone mineral density (BMD) and BV/TV combined with a later thinning of trabeculae were found in several studies 13,[20][21][22][23][24][25][26], while other studies detected no changes in these parameters 16,18. Also, the variability of the studied OTM time points was shown to result in modified bone and PDL characteristics, pointing out the dynamic and often non-linear nature of bone and PDL remodeling 16,[27][28][29]. For a proper and extensive understanding of bone and PDL remodeling and their possible interactions during OTM, it is of paramount importance that these oral processes are carefully and extensively investigated in various regions of the periodontium, and particularly at longer OTM durations.
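As an illustration of how two of these indices relate to a segmented image, a minimal sketch follows: BV/TV as the bone-voxel fraction of the VOI, and connectivity from the Euler characteristic (Conn = 1 - χ, the usual estimate when a single bone component and no enclosed cavities are assumed). The smoothed-noise volume, threshold and voxel size are hypothetical stand-ins for a real trabecular VOI.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import euler_number

rng = np.random.default_rng(0)
# Hypothetical 100^3 VOI: smoothed noise thresholded to mimic trabeculae.
field = gaussian_filter(rng.normal(size=(100, 100, 100)), sigma=4)
bone = field > np.quantile(field, 0.7)           # ~30% bone phase

bv_tv = bone.mean()                              # bone volume / total volume
chi = euler_number(bone, connectivity=3)         # 26-connected foreground
conn = 1 - chi                                   # Odgaard-style connectivity

voxel_mm = 0.003                                 # hypothetical 3 um isometric voxels
tv_mm3 = bone.size * voxel_mm ** 3
print(f"BV/TV = {bv_tv:.3f}")
print(f"Conn  = {conn}, Conn.D = {conn / tv_mm3:.0f} 1/mm^3")
```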
The compression or extension within the PDL has mostly been demonstrated as a change of PDL thickness in the direction of the orthodontic force 12,25,29. Such conclusions are typically derived from individually selected 2D transverse sections of the treated molar 25,29,30. Technical difficulties in the exact determination of the OTM force direction in 3D data may result in erroneous PDL thickness estimation. Discrepancies in the quantitatively calculated PDL thickness may also arise from the individual selection of the studied section along the roots. Also, the non-uniform stress distribution inside the PDL volume under various OTM forces, which has previously been shown by finite element analyses 31, indicates that the outcome depends on the studied PDL region. Overall, only a few studies have applied more complex 3D methods to study the PDL changes [32][33][34]. All these aspects point out the necessity of establishing a precise and reproducible methodology for studying the PDL changes inside the complete PDL volume during orthodontic stimuli. To gain a profound insight into the periodontal ligament during orthodontic treatment, a novel 3D approach for the characterization of the complete PDL space and its changes was implemented in this work.

Our study offers a unique characterization of alveolar bone remodeling during OTM in mice over a long time period of 5 weeks and provides a better insight into the later phases of alveolar bone and PDL remodeling. The macroscopic modifications of the PDL as a complete region were precisely studied as well. The focus of our study was the accurate and user-independent characterization of several periodontium areas around the orthodontically treated upper first molar tooth, and the differences in the tissue remodeling processes affected by compression or tension forces in these regions. Finally, this study offers an important improvement of the methodology applied in volumetric micro-CT studies of the periodontium during orthodontic therapy.

Ethical statements

The animal study protocol was approved by the competent authority and performed in compliance with the German Animal Protection Act (approval ID: 81-02.0420190.A190, committee of North Rhine-Westphalia, Germany). All experimental methods were performed in accordance with the ARRIVE guidelines. All experiments were carried out following relevant guidelines and regulations.

Micro-CT scans

Maxillae (n = 3) were scanned in a Skyscan 1272 (Bruker MicroCT, Belgium) at 60 kV and 166 µA, using a 424 ms integration time, resulting in an isometric voxel size of 3 µm. Data were reconstructed with NRecon software (Bruker MicroCT, Belgium) and evaluated for the microstructural and PDL parameters using CTan software (Bruker MicroCT, Belgium) in a transverse view. For 3D images, CTvox (Bruker MicroCT, Belgium) was used for 3D rendering.

3D analyses

The upper jaw area from the 1st (M1) to the 3rd (M3) molar was studied. The changes on the OTM (right) side carrying the spring coil were compared with the contralateral side as a control (CC) (Fig. 1A).

Tooth movement estimation

One control side (CC) scan was chosen as a reference for the 3D registration (DataViewer, Bruker MicroCT, Belgium) of the scans, to ensure the same jaw orientation for all scans. Subsequently, the remaining two CC scans were registered to this CC reference according to the area of the second and the third molar (Fig. 1).
Afterwards, all CC sides were geometrically flipped over a sagittal plane in transverse view to mimic the opposite (OTM) side, using the geometrical transformation algorithm (CTan, Bruker MicroCT, Belgium). Each OTM side was then registered to its flipped CC (FCC) side according to the M2-M3 region. Afterwards, the M1 movement (translational and rotational) was estimated by an additional 3D registration inside the M1 region, from the initial and final position matrices of the 3D data (DataViewer, Bruker MicroCT, Belgium) (Fig. 1).

Morphometric analysis

To ensure the reproducibility of the morphological analysis through automatic localization of the analyzed region (VOI), all data were initially registered in 3D (DataViewer, Bruker MicroCT, Belgium) to reach the closest possible identical orientation. The OTM data were registered to their contralateral flipped control side (FCC) according to the 1st molar region, for the analysis of the alveolar bone between the mesiobuccal and distobuccal root of M1, presumably containing a combination of tension and compression subregions (Fig. 1D). In that way, a single VOI could be chosen for all data while its reproducible and exact position was ensured. The registered data were then analyzed in CTan (Bruker MicroCT, Belgium) for bone mineral density (BMD) and microstructural parameters inside a cylindrical volume of interest (VOI:M1), to obtain the following characteristics: bone volume/total volume (BV/TV); trabecular thickness (Tb.Th); trabecular separation (Tb.Sep); and intratrabecular connectivity, defined as the number of independent connections within a complex bone structure (connectivity). BMD was normalized to the average BMD of the control group as relative BMD. The VOI:M1 consists of a cylindrical cut with a cross-sectional area of 1.0 mm² and a height of 0.9 mm (Fig. 3A). The structural parameters were estimated after virtual removal of the tooth and soft tissue (pulp and PDL) from VOI:M1, using a combination of thresholding and a seed-growing algorithm (ROI Shrink/Fill-out option), together with an algorithm for closing the broken pores on the PDL/cementum/dentin border. A constant threshold for the delineation between hard and soft tissue or fluids was chosen for all scans, for a more precise comparison. The same parameters (BV/TV, Tr.Th, Tr.Sep and connectivity) were estimated for the bone area between the 1st and the 2nd molar (VOI:M1-2), to study the bone under predominantly tensile force (Fig. 6A). For the definition of VOI:M1-2, the data registered to the M2-M3 region were used, as shown in Fig. 1C. This cylindrically shaped volume of interest, with a height of 0.63 mm and a circular cross-sectional area of 1.05 mm², included the alveolar bone between both molars and, partially, the bone around the roots of M1. A slightly smaller VOI was chosen for M1-2 to exclude the bone region under the roots.

Periodontal ligament

For the characterization of the PDL, a cylindrical VOI similar to VOI:M1 but with a larger cross-section, covering the complete space of the 1st molar and the surrounding PDL, was considered as the volume of interest.

Statistical analysis

To estimate the minimal sample size, a power analysis was performed using G*Power software (G*Power 3.1.9.2, F. Faul, University Kiel, Germany). Since no OTM analysis after 5 weeks of treatment had been published yet, the power analysis, with a significance level of 5% and a power of 80%, was based on the BV/TV results of Kako et al.
35 for only 3 weeks of OTM (50.7 ± 3.1 for the control group; 30.5 ± 1.7 for the OTM group), resulting in a required minimal sample size of only 2. Based on this requirement, n = 3 was used in this study. The statistical analysis was performed using GraphPad Prism (version 9.4.1) owing to the small sample size. Inter-group comparisons were estimated based on independent unpaired two-tailed Student's t-tests (α = 0.05). All data are presented as mean ± standard deviation (SD).

Long OTM results in a complex tooth movement in mice

The process of 3D overlapping of a geometrically flipped CC and an OTM side before the tooth movement estimation, and the setup of the VOI for the morphological calculations, ensured a precise comparison. The complete tooth movement after 5 weeks of treatment was found to be complex and individual. One sample showed predominantly translational movement (blue-colored images in Fig. 2B); the other two maxillae showed a strong rotational component in addition to the translation. Also, the direction of rotation varied strongly (Fig. 2C). The high variations in translational and rotational movements indicate a rather complex nature of the tooth movement during such a long orthodontic treatment (Fig. 2).

Alveolar bone underwent significant morphological changes

The VOI:M1 revealed highly significant changes in the BV/TV ratio. Relative BMD and connectivity also showed significantly changed values after 5 weeks of OTM (Fig. 3B). The changes in bone mineral density are less pronounced, but still follow the expected trend of bone mineral loss during the induced bone remodeling and parallel the BV/TV changes. Bone connectivity proved to be a sensitive parameter for exploring bone morphometric changes, which, on the other hand, led to strong deviations. Still, the higher porosity accompanied by a strong increase in connectivity indicates that, after 5 weeks of OTM, the appearance of new pores and the widening of existing neighboring ones were extensive enough to cause a more complex trabecular structure (Fig. 3).

Bone loss during OTM was found to occur in parallel with the thinning of the alveolar bone inside the M1 root system (Fig. 4A). Visibly fewer trabeculae with a Tr.Th over 45 µm (red color-mapped regions) were found in the OTM sample. This alveolar bone consists of trabeculae with an average thickness of (57 ± 11) µm; in contrast, the control trabeculae showed a significantly higher Tr.Th of (111 ± 5) µm (Fig. 4C). The graph of the average Tb.Th volume distribution shows the shift from a relatively wide thickness distribution in the control to predominantly thinner sizes due to the OTM treatment (Fig. 4B).

Alongside the higher porosity and bone thinning after 5 weeks of OTM (Figs. 3 and 4), the expansion of Tr.Sep in volume (Fig. 5B) reveals that some bone regions are replaced with pores. A higher density and larger dimensions of pores compared to the control can be seen too (Fig. 5A). The Tb.Sep volume distribution exhibits a slight shift of its center toward larger pore sizes. The overall volume was larger in the OTM group, confirming the findings of decreased BV/TV shown in Fig. 3B. The smallest pores, defined by a diameter of 6 µm, prevailed in the control, while a prevalence of larger pores of 24-42 µm can be observed in the OTM group. These findings were confirmed by the statistical outcome of a significantly increased Tr.Sep within the OTM group (Fig. 5C). The thinning of the trabeculae is clearly visible in the analyzed 3D images (Fig. 4A)
and in good agreement with the increased density and size of the pores that replaced the missing trabeculae (Fig. 5A). Thicker trabecular structures, with a thickness of over 150 µm, practically disappeared from the VOI:M1 after 5 weeks of OTM (Fig. 4B), while most trabeculae were reduced in thickness to values around 40 µm only. Also, the size distribution of Tr.Th became visibly narrower following OTM. In the region between M1 and M2, this thinning was less evident. The variability of the pore sizes remained the same, while significantly more pores appeared in the VOI:M1 (Fig. 5B). At the same time, the center of the distribution was shifted toward higher Tr.Sep values. The distribution shown in Fig. 5B demonstrates the importance of characterizing these parameters as distributions: even with a minor change in the size distribution of Tr.Sep, and thus in the average Tr.Sep, firm morphological changes may take place (Fig. 5).

Similar structural changes found in both studied areas - M1 and between M1 and M2

The missing convention for the morphometric evaluation of bone remodeling under OTM treatment raises the questions of which alveolar bone on the OTM side is predominantly modified by bone resorption, where bone growth prevails, and which region undergoes the strongest structural changes. Therefore, besides the typically studied M1 region, the alveolar bone between the 1st and the 2nd molar (VOI:M1-2) was investigated in our study. Based on the similarity of the trends in all evaluated parameters to the results from the VOI:M1 region (Fig. 6B), seemingly similar bone remodeling changes proceeded in both regions during the 5 weeks of OTM. Increased porosity, Tr.Sep and connectivity, together with the decreased Tr.Th, confirmed the bone resorption process. Still, based on the weaker statistical differences, these changes may be less pronounced in this volume of interest in comparison with VOI:M1 (Figs. 3, 4, 5 and 6).

Periodontal ligament complex expansion after OTM

Due to the inherent connection between bone and PDL inside the periodontium, the periodontal ligament was also evaluated for possible remodeling. PDL volume and thickness increased in a similar manner following OTM treatment (Fig. 7C), although only the PDL thickness changes were found to be significant (p = 0.0007). This thickening of the PDL, together with its deformation, can be observed in the 3D rendering with a color-mapped depiction of PDL thickness (Fig. 7A). The volumetric distribution of PDL thickness (Fig. 7B) confirms the malformation through a strong widening of the distribution in all OTM samples, in comparison to the relatively narrow distribution located around thinner PDL thickness values in the control group (Fig. 7).
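Given the PDL volume and thickness results just described, a minimal sketch of one way to sample PDL thickness volumetrically may be useful: for every voxel on the root surface, take the Euclidean distance to the nearest bone voxel. The masks, voxel size and cylindrical toy geometry are hypothetical, and no virtual 'closing' of broken pores at the bone surface is performed, a caveat discussed later in this paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation, distance_transform_edt

def pdl_thickness_samples(tooth, bone, voxel_mm):
    """Shortest root-surface-to-bone distances (mm) over the whole PDL space."""
    dist_to_bone = distance_transform_edt(~bone) * voxel_mm  # mm to nearest bone voxel
    surface = tooth & binary_dilation(~tooth)                # outermost root voxels
    return dist_to_bone[surface]

# Toy geometry: a cylindrical "root" inside a bone "socket", with a gap (the PDL).
zz, yy, xx = np.mgrid[0:60, 0:60, 0:60]
r = np.hypot(yy - 30, xx - 30)
tooth = r < 10
bone = r > 14
d = pdl_thickness_samples(tooth, bone, voxel_mm=0.003)
print(f"PDL thickness: mean {d.mean():.4f} mm, max {d.max():.4f} mm")
```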
Discussion

Long-term orthodontic treatment studies in mice or rats over 3 or more weeks are scarce due to the challenging experimental setup 2,16. We have successfully carried out the longest OTM experiment in mice studied so far 2 and could profoundly investigate the bone and PDL changes after this extended mechanical stimulus, with clear alterations in tooth position, alveolar bone, and PDL morphology. The differences in bone form between various OTM stages, found in previous works 16,27, demonstrate that a thorough understanding of bone remodeling also in the later stages of orthodontically induced periodontal remodeling is highly relevant. This offers an important step for investigating the interrelation between modifications in the alveolar bone microstructure and the periodontal apparatus. The strong variability in tooth translation and rotation, as well as in their directions, found in this study points out the continuous adaptation of the force distribution after each small movement during the 5 weeks of OTM. Although possible small discrepancies in the orthodontic force direction, due to the difficult placement of the NiTi coil in the mouse model, may also contribute to the OTM variability, they cannot fully explain the large differences in tooth movement. Other factors, such as biological and behavioral differences between the animals, may also influence the progression of OTM. The fact that the estimated rotational movements were found in both directions, despite no external changes to the NiTi coil setup, indicates that the center of rotation and the nature of the movement adapt to the bone changes over the studied time frame and may thus be inconsistent. For these reasons, we propose that periodontal bone remodeling, especially during long-term orthodontic treatment, cannot simply be divided into compression and tension parts without an exact estimation of the tooth movement and rotation. Rather, a complex remodeling takes place in a wider area around the tooth roots within the periodontal apparatus. A similar consideration may also apply to shorter studies, as a certain degree of tooth rotation cannot be excluded in earlier stages. It should be noted that certain biological aspects, such as possible geometrical differences between the two sides of the maxillae, may also contribute to the apparent OTM variance in this study.
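The registration-based estimate of translation and rotation discussed above could be prototyped along the following lines. SimpleITK with Mattes mutual information and a step-gradient optimizer stands in here for the commercial pipeline used in this study (DataViewer/Amira with Normalised Mutual Information and a Quasi-Newton optimizer), and the file names are hypothetical.

```python
import SimpleITK as sitk

# Hypothetical inputs: the M1 region of the flipped control (FCC) and OTM sides,
# already cropped to the registration region and sharing a common grid.
fixed = sitk.ReadImage("fcc_M1.mha", sitk.sitkFloat32)
moving = sitk.ReadImage("otm_M1.mha", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=64)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=300)
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

final = reg.Execute(fixed, moving)
# For a rigid Euler3D transform the parameters are three rotation angles
# (rad, about x/y/z) followed by three translations (physical units).
ax, ay, az, tx, ty, tz = final.GetParameters()
print(f"rotation (rad): {ax:.4f} {ay:.4f} {az:.4f}")
print(f"translation:    {tx:.4f} {ty:.4f} {tz:.4f}")
```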
For the evaluation of bone changes following OTM treatment, several studies applied a division into tension and pressure sides [13][14][15]20,36. Still, the complexity of tooth movement during a NiTi-coil-induced OTM in small-animal research poses considerable obstacles to the definition of tension and compression sides. Often, the bone resorption mechanism is prevalent on both sides, leading to higher porosity and lowered BMD and Tr.Th in both regions 13,20. Some studies found opposite remodeling outcomes on the two sides, with lower porosity and thicker or unchanged trabeculae on the tension side, while thinning of the trabecular bone, loss of BV/TV and smaller Tr.Sep appeared in the compression region 14,15,36. It was usually not examined whether the tooth movement was consistent, whether the constant force led only to translational movement in one direction, or whether an additional rotation of the complete root system took place during the time frame of mechanical force application. The OTM mechanism is known to be highly complex, and a combination of movements (i.e., translation and rotation) may happen simultaneously and interchangeably 16,33. Based on an in vivo study of OTM in male Wistar rats over up to 31 days, Zong et al. 16 suggested that the tension and pressure sides may not lie on opposite sides of the tooth root, but rather be adjacent and interconnected, due to the inconsistent OTM rate over the timeline of orthodontic treatment. For that reason, a simple definition of such areas, as found in the literature and based solely on the NiTi-coil direction, may lead to misleading results regarding the differing bone remodeling under the opposite mechanical stimuli.

Most OTM studies in rodents focus on the alveolar bone in the region of the 1st molar, since this is typically the experimentally treated molar. Because the largest bone changes are expected in the region between the mesiobuccal and distobuccal root of M1, a cubic or variably shaped VOI is often chosen in this area for the morphometric analysis 18,25,[35][36][37][38][39][40]. In a few studies, the VOI was set up in a region further from the roots 14 or directly around the roots 21. Still, the exact position, size and form of the VOI vary strongly in the literature. Since studies on differences in the bone remodeling process between various regions of the alveolar bone and PDL are missing, it remains an open question whether only the alveolar socket of M1 is strongly affected and whether other processes take place in the surrounding regions. A relatively wide VOI might reduce the statistical power, because alveolar bone not affected by the movement might be included too. Some studies suggest applying a thin VOI (~100 µm thickness) within the region where bone remodeling is expected to take place 21. It is, however, questionable whether the significant bone changes are restricted only to such relatively small regions close to the stress initiation points, or whether wider regions may also be affected, especially after prolonged orthodontic treatment. A too-small VOI may limit the accurate estimation of parameters such as Tr.Th and Tr.Sep if the VOI dimensions are comparable to these parameters 13. On the other side, Zong et al. 16 evaluated the complete alveolar and basal bone around and under M1 and M2. However, including the largely unchanging basal bone in the VOI hinders the detection of smaller changes in the analyzed tissue.
To accurately estimate the changes within the alveolar and periodontal bone, we chose to analyze the complete alveolar bone region between the mesiobuccal and distobuccal root (VOI:M1), or between the distobuccal root of M1 and the mesiobuccal root of M2 (VOI:M1-2), reaching from the furcation up to the tip of the shortest root of M1. In this way, a maximal VOI is chosen where possibly strong changes can be expected, while the basal and cortical bone are excluded. Also, our VOI dimensions were large in comparison to the obtained Tr.Th and Tr.Sep. Most importantly, a precise 3D registration ensures an almost identical position of the studied VOI between the samples. This allows a more precise estimation of the morphological parameters as well as of the BMD. To avoid the influence of the cortical bone, the outer boundaries of the VOI were set close to the center of the roots in order to exclude such regions.

Both bone resorption and bone deposition processes are known to relate to an increased porosity and a lower bone volume in the actively regrowing region 41. Although some studies in mice or rats could detect no modifications in BV/TV after a short OTM (up to 3 weeks) 18,42,43, most of them showed a similar reduction of BV/TV, and thus an increase of bone porosity, in the OTM group 13,21,[24][25][26]38,44. In our study, both regions of interest showed a significantly lowered BV/TV, indicating a still-regenerating phase of the alveolar bone. Given the lack of long-term OTM investigations in mice or rats in the literature, this study demonstrates in a novel way that the previously observed bone remodeling also proceeds after a relatively long force application. Higher porosity often leads to a stronger intratrabecular connectivity. We observed such an increase in both VOI regions, while the connectivity increase was larger in VOI:M1. The higher porosity and connectivity of the OTM-affected regions indicate a formation of woven bone along the collagenous PDL fibers in the widened PDL-bone attachment region 32,41. This initial bone form is known to have an increased porosity compared to native alveolar bone and is therefore more susceptible to resorption if inflammation is initiated. In the later stages of bone remodeling, the woven bone reshapes into lamellar bone 45.

Alongside these findings, the mineral density seems to be mostly reduced after OTM too 13,23,38, which was also confirmed in a study on orthodontic patients 22. Interestingly, Wang et al. 27 observed a strong reduction of BMD after only 2 weeks of OTM, followed by a turnover and its correction within the next 2 weeks of orthodontic treatment. This phenomenon may explain the findings of other studies in mice or rats, where BMD was not significantly different after 3 or more weeks of OTM 16,18,43. The possible regeneration of the mineral density over the period of 5 weeks of OTM is also consistent with our findings in both VOIs (i.e., VOI:M1 and VOI:M1-2), where the BMD reduction was relatively small or non-significant. The fact that the alveolar bone becomes more porous after OTM while the mineral density remains relatively stable indicates concomitant bone formation and resorption caused by sterile inflammation 16. That is the opposite regulation to the known pathological processes during osteoporosis 46 or periodontitis 47, where both parameters are reduced simultaneously. Thus, BMD alone is probably not sufficient to study bone changes under mechanical stimulus.
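As a design aside, the a priori sample-size reasoning quoted in the Statistical analysis subsection (BV/TV of 50.7 ± 3.1 versus 30.5 ± 1.7, α = 0.05, target power 80%) can be checked with a few lines. statsmodels is used here as a stand-in for G*Power, and the pooled-SD effect size is an assumption about how that calculation was set up.

```python
import math
from statsmodels.stats.power import TTestIndPower

m_ctrl, sd_ctrl = 50.7, 3.1     # BV/TV, control (Kako et al., 3 weeks OTM)
m_otm, sd_otm = 30.5, 1.7       # BV/TV, OTM group
pooled_sd = math.sqrt((sd_ctrl**2 + sd_otm**2) / 2)
d = abs(m_ctrl - m_otm) / pooled_sd              # Cohen's d, ~8 here

power = TTestIndPower()
for n in (2, 3, 4):
    p = power.power(effect_size=d, nobs1=n, alpha=0.05, alternative="two-sided")
    print(f"n = {n} per group -> power = {p:.3f}")
# Even n = 2 should exceed the 80% target, consistent with the minimal
# sample size of 2 quoted above.
```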
In parallel with bone loss or growth, a thinning of trabeculae has been detected in many studies 16,18,20,43. The reduction of Tr.Th obtained in this study after 5 weeks of OTM in both VOIs was greater than that found by Holland et al. 48 after 3 weeks of OTM in wild-type mice. Interestingly, this reduction was stronger in the M1 region than in the VOI between M1 and M2. The lowering of Tr.Th is expected, assuming that both bone loss and bone regeneration are happening in VOI:M1, while VOI:M1-2 may predominantly cover a region of bone growth, which follows a similar trend in the other parameters, such as BV/TV, BMD and connectivity. Lower Tr.Th has been found mainly in compression regions, where the bone loss process dominates 14,36. While Dorchin et al. 20 found trabecular thickening on both the tension and compression sides, a constant 36, decreased 13, or even increased 14 value of Tr.Th has been observed on the tension side.

Together with the thinning of trabeculae, a simultaneous increase of Tr.Sep is in agreement with other studies 14,18,36 and was found to be significant only in VOI:M1, although both areas presented the same trend. Trabecular separation seems to be a less sensitive parameter of bone remodeling and was found to be stable even after 3 or 4 weeks of OTM in rats 16,43. Bone regrowth on the tension side resulted in a constant Tr.Sep 36 or in its reduction 14, leading to structures with smaller pores. Our observation indicates that a bone loss mechanism is prevalent in both studied regions.

For a more complete view of the changes in the periodontium under orthodontic stimuli, this study emphasizes the importance of a precise evaluation of the periodontal ligament. The PDL assumes a regulating function for the force transfer in a bone-PDL-tooth joint-like system 49. An optimal PDL space biomechanically permits the effective redistribution of stress from the tooth to the adjacent tissue. Although the exact role of the PDL tissue in bone remodeling under mechanical load has not been fully understood yet, there is a consensus about the high importance of the PDL tissue in such processes, as well as about its function as a stress absorber and redistributor to the adjacent alveolar bone 41. The mechanical response and remodeling factors inside the PDL region are expected to be strongly influenced by the heterogeneity of the PDL fiber density and vitality, and thus by the variability of the stress-distribution microregions 50. Previous analyses of the PDL system indicate the complexity of this collagenous system and of its reactions to external stimuli 51. Still, the mechanism of the PDL-space response to variable magnitudes, directions and durations of loading needs to be examined in more detail. Therefore, the understanding of PDL tissue remodeling as a result of force application is an important factor of progress for orthodontic treatment strategy.
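Much of this volumetric, user-independent quantification is straightforward to prototype. For instance, a trabecular-thickness distribution of the kind compared above (Fig. 4B) could be approximated as sketched below. This is not the largest-inscribed-sphere definition used by CTan (Hildebrand and Rüegsegger); it uses the common shortcut of doubling the Euclidean distance to the marrow, sampled on the medial axis, and the toy volume, threshold and voxel size are hypothetical (3-D skeletonize requires scikit-image >= 0.19).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter
from skimage.morphology import skeletonize

rng = np.random.default_rng(0)
field = gaussian_filter(rng.normal(size=(80, 80, 80)), sigma=4)
bone = field > np.quantile(field, 0.65)          # toy trabecular phase

voxel_um = 3.0                                   # hypothetical isometric voxel size
dist = distance_transform_edt(bone)              # distance to nearest marrow voxel
skel = skeletonize(bone)                         # 3-D medial axis
th_um = 2.0 * dist[skel] * voxel_um              # local thickness samples, um

hist, edges = np.histogram(th_um, bins=np.arange(0, 150, 15))
print(f"approximate mean thickness: {th_um.mean():.1f} um")
for lo, count in zip(edges[:-1], hist):
    print(f"  {lo:5.0f}-{lo + 15:3.0f} um: {count} skeleton voxels")
```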
Often, bone morphology and PDL shape are studied only qualitatively and only on selected 2D images, missing the complete information from the whole bone and PDL volume 12,14,21,25,29,30,38,43. Still, for an accurate comparison, the exact position and orientation of the analyzed 2D sections on the control and OTM-treated sides is crucial. A volumetric registration of the studied periodontium region should therefore be an important prerequisite for such comparative evaluations of PDL thickness. Manually oriented data compromise the validity of the results through the influence of the sectioning angle, as well as of the position along the molar roots, on the estimated PDL thickness. Additionally, the coordinates in which the PDL thickness is measured in 2D are often based on the assumed vector of the orthodontic force applied to the molar tooth. However, our data, supported by previous findings 31,33, demonstrate the complexity of tooth movement during long-term OTM and the simultaneous variability of the force direction during the bone and PDL remodeling process. Therefore, a simple definition of the orthodontic force without an exact estimation of tooth movement and stress distribution may lead to misleading results.

Moreover, the changes in PDL form reported in the literature are mostly in agreement with our results of PDL thickening due to orthodontic stimuli. An increase in PDL thickness was also detected by Li et al. 43 in rats after only 21 days of OTM. Significant PDL thickening on the compression side of the distobuccal root of M1 was also observed in mice after 12 days of OTM 25. A thickened PDL was also found in the 1st molar of C57BL/6 wild-type mice after occlusal trauma 30. A strong widening of the PDL on both the pressure and tension sides after 4 and 2 weeks of OTM, respectively, was shown histologically in rats by Hundson et al. 38 and Dorchin et al. 20. Nevertheless, only a qualitative evaluation was performed in those studies. Dynamic changes in the PDL form were observed by Laura et al. 29: the PDL showed a thickening tendency on the tension side and a thinning on the opposite compression side during the first 3 days of treatment, though the original PDL thickness was recovered within the next 2 weeks. Such findings are in line with the compressed PDL on the compression side and the extended PDL on the tension side observed by Shalish et al. 12 after only 4 days of mechanical force application. Both findings indicate asymmetric changes in PDL thickness, caused mainly by tooth movement in the second stage of OTM and assuming no or only a negligible effect of rotational forces. Unfortunately, neither work verifies whether such PDL deformation also occurred in other root sections.

A full volumetric evaluation of the PDL is provided in only a few studies on OTM in mice [32][33][34]. Here, the PDL thickness was determined by an algorithm that defines the shortest distance from each pixel of the root surface to the surrounding bone surface. For a correct interpretation of the results, it is important to note that, without virtual 'closing' of the PDL-bone contact surface, such an algorithm also defines distorted areas and 'broken pores' in the alveolar bone as a wider PDL region. Still, the widening of the PDL thickness found in our study is in agreement with the observations in these studies 32,33. A volumetric approach was also applied in the study by Wolf et al. 21, where the effect of PDL region extension after mechanical stimuli was confirmed and demonstrated by the significantly lower bone volume inside a 100-µm-thick VOI around the molar roots in mice after 11 days of OTM. We note that additional cementum loss and tooth resorption effects may also contribute to the higher estimated PDL volume after orthodontic treatment 2,21.

The notably similar trends in the changes of PDL thickness and volume found in our study indicate that the PDL deformation on the side with more compressive force is not compensated by its extension in the tension areas after such a long application of orthodontic stimuli. Rather, the whole PDL system is deformed and remodeled. Our findings imply that the observed changes within the PDL are not solely due to the dislocation of the tooth in the alveolar socket, and that they do not correlate with the tension and compression sides after such a long mechanical load. Still, more studies are needed to answer the resulting questions, such as what happens to the PDL and alveolar bone reformation after the orthodontic force is removed; how alveolar bone and PDL remodeling is influenced by different forces; and how long, and by which process, bone and PDL recover their original form at different applied loads. This study was conceived to provide first evidence of remodeling over an extended time span, which was successfully achieved through clear, significant differences in several morphological parameters even with a relatively small sample. Future studies with higher sample numbers, and analyses of the periodontal ligament and the microstructure of the alveolar bone after removal of the appliances, may be essential for a better understanding of these phenomena and their application in clinical research.

Conclusion

In this study, we provide a precise evaluation of alveolar bone and PDL-space remodeling after a uniquely long orthodontic treatment of 5 weeks in mice. Parallel bone remodeling trends were found in two regions adjacent to the orthodontically treated molar tooth, indicating a similar process independent of the direction of the original orthodontic load. A similar conclusion may be drawn from the extended PDL space after this long OTM treatment. For a deeper understanding of the bone and PDL remodeling process during OTM treatment, we propose to estimate the dimensional factors of tooth movement prior to an extended volumetric evaluation of tissue morphology. Besides the precise and reproducible definition of the studied alveolar bone or PDL area, a comprehensive analysis of the periodontal ligament in 3D is essential.

Figure 1. Schematic representation of the algorithms for the estimation of tooth movement and the setup for the reproducible and identical localization of the studied VOIs. (A) First, the CC sides (white) were registered in 3D to the CC reference (blue) according to the M2-M3 region. (B) Subsequently, these CC sides (white) were geometrically flipped to FCC (rose) over the depicted sagittal plane in transverse view. (C) OTM sides (green) were registered to the FCC (rose) according to the M2-M3 region. (D) The tooth movement was defined from the further 3D registration of OTM to FCC according to the M1 region, as the difference between the final OTM position and its position in (C).
Figure 2. Tooth movement during a long OTM treatment. (A) Schematic representation, based on micro-computed tomography scans, showing 3-dimensional sagittal and occlusal views of the orthodontic appliance used in the mouse: the Ni-Ti coil (COIL) attached with composite (COMP) from the upper incisor (INC) to the first molar (M1). The region of the first to the third molar (M1, M2, M3) was used for analysis. The lower view of the maxilla shows that the left side served as control (CC) and the right side was orthodontically treated (OTM). (B) The complexity of tooth movement during long-term OTM is shown on two examples. A strong rotational component is visible in the red-colored sample after 5 weeks of OTM in comparison to the white-colored flipped CC; a predominantly translational movement was found in one sample (blue-colored OTM in comparison to the white-colored flipped CC). The axes are indicated by the colored arrows and were defined as: x (red)-from buccal to palatal; y (blue)-from distal to mesial; z (green)-vertical (from apical to occlusal). The translational movement and the rotations around the x and z axes are symbolized by the white arrows. (C) The non-conformity of the translational movement after 5 weeks of OTM is reflected in the large deviations of the average translation along all three axes. The complexity of the tooth relocation is confirmed by the non-zero rotation angles with strong deviations, mainly around the x and z axes (γ and α, resp.).

Figure 4. Changes of bone loss and thinning of the alveolar bone during OTM. (A) 3D rendering of the bone structure inside VOI:M1 with a color-mapped representation of trabecular thickness (Tr.Th) shows clear differences in the density of the thicker trabeculae between the groups. The color-mapped scale presents the distribution of Tr.Th in µm. (B) The average volume distribution of the trabecular thickness in VOI:M1 after 5 weeks of OTM covers mainly smaller Tb.Th values compared with the non-treated side (control), indicating thinning of the bone structure. (C) The average trabecular thickness (Tr.Th) of the alveolar bone inside the M1 region was also strongly reduced, **p < 0.01.

Figure 5. Changes in trabecular separation during OTM. (A) Color-mapped presentation of the trabecular separation (Tr.Sep) distribution inside VOI:M1 indicates a strong increase in the density and dimension of the pores after 5 weeks of OTM. The color-mapped scale shows the Tr.Sep distribution in µm. (B) The average volume distribution of Tr.Sep in VOI:M1 confirmed this, with a visibly lower number of small pores and more middle-sized pores in the treated side compared to the control. (C) The average Tr.Sep was also found to be significantly higher in the OTM group, **p < 0.01.

Figure 7. Periodontal ligament complex expansion following OTM. (A) Widening of the PDL layer, as seen in the PDL thickness distribution depicted in the color-mapped 3D rendering images. The color-mapped scale bar shows the PDL thickness in µm. (B) The volumetric distribution of PDL thickness, where the relatively narrow distribution of the control group spreads toward higher PDL thickness values. (C) A t-test confirmed the significant increase in PDL thickness, ***p < 0.001.
2023-11-16T06:18:24.308Z
2023-11-14T00:00:00.000
{ "year": 2023, "sha1": "523ffe0d4cafac42e634c49e80ffbe8e8a363846", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-47386-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b1d6a96cc13402e5fef7fdb2b4a0f71423513d28", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219156772
pes2o/s2orc
v3-fos-license
PTH(1–34) treatment and/or mechanical loading have different osteogenic effects on the trabecular and cortical bone in the ovariectomized C57BL/6 mouse

In preclinical mouse models, a synergistic anabolic response to PTH(1–34) and tibia loading was shown. Whether combined treatment improves bone properties under oestrogen deficiency, a cardinal feature of osteoporosis, remains unknown. This study quantified the individual and combined longitudinal effects of PTH(1–34) and loading on bone morphometric and densitometric properties in ovariectomised mice. C57BL/6 mice were ovariectomised at 14 weeks old and treated either with injections of PTH(1–34); compressive loading of the right tibia; both interventions concurrently; or both interventions on alternating weeks. Right tibiae were microCT-scanned from 14 until 24 weeks old. Trabecular metaphyseal and cortical midshaft morphometric properties, and bone mineral content (BMC) in 40 different regions of the tibia, were measured. Mice treated only with loading showed the highest trabecular bone volume fraction at week 22. Cortical thickness was higher with co-treatment than in the mice treated with PTH alone. In the mid-diaphysis, increases in BMC were significantly higher with loading than with PTH. In ovariectomised mice, the osteogenic benefits of co-treatment on the trabecular bone were lower than with loading alone. However, combined interventions had increased, albeit regionally dependent, benefits to cortical bone. The increased benefits were largest in the mid-diaphysis and postero-laterally, regions subjected to higher strains under compressive loads.

Methods

Animals and treatment. Twenty-four virgin female C57BL/6 mice were purchased at 13 weeks old (Charles River UK Ltd., Margate, UK). Mice were housed, four per cage, in The University of Sheffield's Biological Services Unit at 22 °C, with a twelve-hour dark/light cycle and ad libitum access to 2918 Teklad Global 18% protein rodent diet (Envigo RMS Ltd., UK) and water. All procedures were performed under a British Home Office licence (PF61050A3) and in compliance with the Animals (Scientific Procedures) Act 1986. This study was reviewed and approved by the local Research Ethics Committee of The University of Sheffield (Sheffield, UK). The findings and experiments in this paper were designed and reported in accordance with the ARRIVE guidelines 35. C57BL/6 female mice were chosen due to their documented skeletal responsiveness to mechanical loading, PTH or OVX 27,29,31,36. Peak cortical bone mass has been reported in the appendicular skeleton of female C57BL/6 mice at 3-4 months of age 37; the mice herein were thus considered skeletally mature at the onset of this study (14 weeks of age). An a priori estimate of the sample size, based on large loading effects on the trabecular bone volume fraction and cortical thickness after six weeks of loading in PTH-treated mice 27, indicated that six mice per group were necessary to achieve 80% statistical power, assuming Cohen's d = 2 and α = 0.05. At age 14 weeks, and following one week of acclimatization, all mice underwent OVX and remained untreated for 4 weeks following surgery to allow oestrogen-deficiency-related bone loss 36. OVX mice were randomly assigned to 4 treatment groups (n = 6 mice/group) and then treated, per the schedule in Fig. 1(A),
At age 14 weeks, and following one week of acclimatization, all mice underwent OVX and remained untreated for 4 weeks following surgery to allow oestrogen-deficiency related bone loss 36. OVX mice were randomly assigned into 4 treatment groups (n = 6 mice/group) and then treated, per the schedule in Fig. 1(A), with either (1) PTH between weeks 18 and 22, subgroup "PTH"; (2) mechanical loading during weeks 19 and 21, "ML"; (3) concurrent treatment with PTH(1-34) and mechanical loading, "ML + PTH"; or (4) weekly alternating treatment with PTH(1-34) during weeks 18, 20 and 22 of age and mechanical loading during weeks 19 and 21, "ML + PTHalt". All mice were withdrawn from treatment for the final two weeks of the study (weeks 23 and 24 of age). We confirmed treatment effects of PTH and mechanical loading by comparing bone properties within the same animals before the treatment started (i.e. relative to week 18 values) and with a group of age-matched C57BL/6 ovariectomized mice from a previous study in our laboratory 36.
Intraperitoneal PTH(1-34) injections. Mice received either intraperitoneal injections of PTH(1-34) (Bachem, Bubendorf, Switzerland) at 100 µg/kg/day 29,31, 5 days/week (groups: PTH, ML + PTH) or vehicle (group: ML). PTH was prepared in 1% acetic acid and 2% heat-inactivated mouse serum in HBSS 31.
Mechanical loading. A minimally invasive method was used for uniaxial compressive loading of the right tibia per a previously published protocol 38. Briefly, the flexed knee and ankle were fixed between two soft cups and the tibia loaded along the superior-inferior axis to a peak load of 12 N. Tibiae were loaded to the 12 N peak by superimposing a dynamic load of 10.0 N upon a static 2.0 N preload at a rate of 160,000 N/second. Forty trapezoidal waveform load cycles were applied (held for 0.2 seconds at 12 N) with a 10-second interval between each cycle. A 12 N load was previously shown to promote significant bone apposition in female C57BL/6 mice without impairing mobility following treatment 38. Mechanical loading was applied to all mice in groups ML, ML + PTH and ML + PTHalt, three days per week (Mon, Wed, Fri) at weeks 19 and 21.
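For readers who want to reproduce the load profile, the trapezoidal cycle described above can be written out directly from the stated parameters. The sketch below is a plausible reconstruction in Python/NumPy (the sampling rate and array layout are assumptions; this is not the loading-device control software).

```python
# One trapezoidal loading cycle built from the stated parameters.
# Sampling rate and representation are illustrative assumptions.
import numpy as np

PRELOAD_N, PEAK_N = 2.0, 12.0   # 2 N static preload; 2 N + 10 N dynamic peak
RATE_N_S = 160_000.0            # ramp rate, N/s (a 62.5 microsecond ramp)
HOLD_S, REST_S = 0.2, 10.0      # hold at peak; rest between cycles
FS = 100_000                    # assumed 100 kHz sample rate

n_ramp = max(1, round((PEAK_N - PRELOAD_N) / RATE_N_S * FS))
ramp_up = np.linspace(PRELOAD_N, PEAK_N, n_ramp)
hold = np.full(round(HOLD_S * FS), PEAK_N)
rest = np.full(round(REST_S * FS), PRELOAD_N)

cycle = np.concatenate([ramp_up, hold, ramp_up[::-1], rest])
# The protocol applies 40 such cycles per session, e.g. np.tile(cycle, 40).
```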
In-vivo microCT imaging. The whole right tibia of each mouse was imaged in vivo with microCT (VivaCT80, Scanco Medical, Bruettisellen, Switzerland). A baseline scan (before OVX surgery) was performed at 14 weeks of age, then follow-up in vivo scans were performed every two weeks until week 22 (Fig. 1). At week 24, mice were euthanized by cervical dislocation and both the left and right tibiae were imaged ex vivo using the in vivo scanning protocol.
Image alignment and preprocessing. From each reconstructed microCT image two analyses were performed (Fig. 1(B)): standard 3D morphometric analysis as defined in the guidelines of the American Society of Bone and Mineral Research (ASBMR) 39,42 and a spatial densitometric analysis 31. The image of one tibia from one mouse at week 14 of age was randomly chosen and used as a reference. The longitudinal axis of the reference tibia was approximately aligned with the z-axis of the global reference system 43. All remaining images (from different mice and different time points) were rigidly registered to the reference images prior to the image analyses below. The rigid registrations were performed using a Quasi-Newton optimizer and Normalised Mutual Information as the similarity measure (Amira 5.4.3, Thermo Fisher Scientific, France) 44. The registered grayscale image datasets were smoothed with a Gaussian filter (convolution kernel [3 3 3], standard deviation = 0.65) in order to reduce high-frequency noise, and bone voxels were defined using a global threshold, calculated as the average of the grey levels corresponding to the bone and background peaks in each image histogram (frequency plot) 39.
Standard 3D morphometric analysis. For trabecular bone analysis a region of interest (ROI) of 1 mm height was selected, 0.3 mm below a reference line defined as the most distal image slice that included the growth plate, adapted from previous research 42,45. This was necessary to minimize analysis of the newly formed (modelled) trabeculae emerging from the growth plate due to continuous longitudinal growth in rodents 46. For cortical bone analysis a region of 1 mm height was selected in the tibia diaphysis and centred at 50% of the tibia bone length 43. ROIs in the trabecular and cortical bone were manually marked and the following 3D bone parameters were computed (CT Analyser v1.18.4.0, Skyscan-Bruker, Kontich, Belgium): trabecular bone volume fraction (Tb.BV/TV), trabecular thickness (Tb.Th), trabecular separation (Tb.Sp) and trabecular number (Tb.N); cortical total cross-sectional area (Tt.Ar), cortical bone area (Ct.Ar), cortical area fraction (Ct.Ar/Tt.Ar) and cortical thickness (Ct.Th). In the midshaft cortical ROI, minimum (Imin) and maximum (Imax) principal moments of inertia, polar moment of inertia (J) and eccentricity (Ecc) were computed.
Spatiotemporal densitometric analysis. Densitometric properties were estimated in multiple regions within the tibia, adapting a previously described procedure 31. Briefly, the length of each tibia (L) was measured at each time point, computed as the distance between the most proximal and distal bone voxels in the registered image stack, and a region 80% of L was cropped starting from the section below the growth plate (MatLab 2018a, The MathWorks, Inc., USA). The tibia was divided longitudinally into ten transverse sections (from most proximal, section C01, to most distal, section C10) with the same thickness (i.e. 8% of L), and each section was then divided into quadrants (anterior, medial, posterior and lateral sectors), for a total of 40 ROIs across the length of the tibia. Anterior, medial, posterior and lateral compartments were defined by two perpendicular lines passing through the centre of mass of each slice. Bone mineral content (BMC) and tissue mineral density (TMD, mg HA/cm³) were measured in each of the 10 sections and in each of the 40 quadrants. This approach provides a reasonable compromise between the measurement spatial resolution along the tibia length (number of sections) and the densitometric measurement reproducibility, while accounting for the small but still present growth of the tibia between weeks 14 and 24 of age 43,47. TMD in each voxel was obtained from its grey level by using the calibration curve provided by the manufacturer of the microCT scanner. The BMC was calculated in each voxel as TMD multiplied by the volume of the voxel. The BMC in each compartment was calculated as the sum of BMC in each bone voxel, while TMD in each compartment was defined as the ratio between BMC and the bone volume (BV) 36.
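The 40-compartment bookkeeping described above is straightforward to implement; the following Python sketch (with an assumed array orientation, generic quadrant labels, and hypothetical inputs — the authors used MATLAB) illustrates the BMC and TMD definitions.

```python
# Sketch of the 10-section x 4-quadrant densitometric partition.
# `tmd` is an assumed (z, y, x) array of per-voxel TMD (mg HA/cm^3),
# zero outside bone, already cropped to 80% of the tibia length;
# anatomical quadrant names depend on image orientation, so generic
# labels q1-q4 are used. Illustrative only, not the authors' pipeline.
import numpy as np

def bmc_tmd_by_roi(tmd, voxel_vol_cm3, n_sections=10):
    out = {}
    z_edges = np.linspace(0, tmd.shape[0], n_sections + 1).astype(int)
    for s in range(n_sections):
        sect = tmd[z_edges[s]:z_edges[s + 1]]
        zc, yc, xc = np.nonzero(sect)
        if zc.size == 0:
            continue
        cy, cx = yc.mean(), xc.mean()        # centre of mass of the section
        vals = sect[zc, yc, xc]
        # two perpendicular (here axis-aligned) lines through the centre
        labels = np.where(yc < cy,
                          np.where(xc < cx, "q1", "q2"),
                          np.where(xc < cx, "q3", "q4"))
        for q in ("q1", "q2", "q3", "q4"):
            v = vals[labels == q]
            bv = v.size * voxel_vol_cm3      # bone volume in the compartment
            bmc = v.sum() * voxel_vol_cm3    # BMC = sum(TMD * voxel volume)
            out[(f"C{s + 1:02d}", q)] = (bmc, bmc / bv if v.size else 0.0)
    return out                               # TMD per compartment = BMC / BV
```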
Statistics. All morphometric and densitometric properties were tested for the assumptions of normality (Shapiro-Wilk test), homogeneity of variance (Levene's test) and sphericity (Mauchly's test). To determine whether the anabolic treatments reverse OVX-induced trabecular bone loss and cortical bone adaptations, data were analysed by two-way mixed Analysis of Variance (ANOVA). Where for a given bone property the F values were significant for a 'time by intervention' interaction, the simple "time effect" was investigated using paired t-tests between (1) treatment baseline (week 18) and subsequent time-points (weeks 20-24) and (2) sequential time-points (e.g. week 20-22 and 22-24 comparisons) 36,48. Between-group differences in bone properties due to treatment and treatment withdrawal (i.e. at weeks 20-24) were analysed using Analysis of Covariance (ANCOVA), adjusted for values at 18-weeks-old (treatment onset) and with post hoc pairwise comparisons (Bonferroni-adjusted for six comparisons among treatment groups). Adjustment for week 18 values mitigates bias due to potential differences in the bone properties at the onset of treatment. Statistical significance was set at α = 0.05. All analyses were performed using SPSS Statistics 25 (IBM Corp., Armonk, NY, USA). Data are presented as mean ± standard deviation (SD) unless otherwise specified. The percentage change in morphometric properties was computed per Eq. (1), where "BP" is the mean bone property value (e.g. of Tb.BV/TV, Ct.Th) and "i" defines a subsequent time point (weeks 20-24):

ΔBP_i (%) = 100 × (BP_i − BP_18) / BP_18    (1)

Changes of tibia densitometric properties are presented as the mean relative percentage difference between the two treatment groups, normalized for the baseline values of the second group (week 14). See the Supplementary Materials for computation of the mean relative percentage difference as per Lu et al. 31.
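The baseline-adjusted between-group test described above is an ordinary ANCOVA; a minimal sketch in Python/statsmodels (hypothetical file and column names — the study itself used SPSS) would be:

```python
# Minimal ANCOVA sketch: week-22 value adjusted for the week-18 baseline.
# File and column names are hypothetical; the study used SPSS Statistics 25.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bone_properties.csv")  # columns: group, wk18, wk22

model = smf.ols("wk22 ~ wk18 + C(group)", data=df).fit()
print(model.summary())

# Post hoc pairwise group contrasts would then be Bonferroni-adjusted for
# the six possible comparisons, e.g. multiply each p-value by 6 (cap at 1).
```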
Results
All mice completed this study without complications. One mouse in the ML + PTHalt group was removed from densitometric analysis as reconstruction of the image data in the distal tibia failed at baseline, but this did not affect morphometric analysis. Data collected in this study are accessible at https://doi.org/10.15131/shef.data.12292787.
In PTH, a small and transient, albeit non-statistically significant, increase in Tb.BV/TV was observed at week 20 (12% increase relative to week 18), corresponding with a significant increase in Tb.Th (+27%, p=0.002). Tb.BV/TV returned to baseline at week 22 (2% reduction relative to week 18). In ML, ML + PTH and ML + PTHalt, Tb.BV/TV values were significantly higher at week 22 than week 18 (+66-89%, p < 0.01), attributed to a significant increase in Tb.Th (+57-62%, p < 0.02). Individual or combined treatments with mechanical loading did not improve Tb.Sp nor Tb.N, relative to week 18 values. In cortical bone, a detectable (significant) change in morphometric properties was observed within two weeks from treatment onset (week 20). With loading (individually or combined with PTH), a significant and persistent increase in Ct.Ar and Ct.Th was observed from weeks 18 to 20 to 22 (6-20% increase/week, p < 0.01), but only from weeks 18 to 20 with PTH alone (+17-18%, p < 0.001).
Effects of treatments and withdrawal on the trabecular and cortical bone morphometry. Significant intervention effects were observed among treatment groups (Fig. 2). At week 22, Tb.BV/TV, adjusted for week 18 values, significantly differed among all treatment groups and was higher in ML, ML + PTH and ML + PTHalt than PTH (76-148% higher, p < 0.05), and higher in ML than ML + PTH and ML + PTHalt (14-60%, p < 0.05). Tb.Th and Tb.N were significantly lower in PTH than in all other groups (−20%, −22%, and −24% for Tb.Th, and −58%, −50%, and −50% for Tb.N compared to ML, ML + PTH, and ML + PTHalt, respectively); and Tb.N was 17% lower with combined treatments than ML. In cortical bone, Ct.Th was 14-16% higher with combined treatment than PTH. Ct.Ar was 12% higher in ML + PTH than PTH.
Significant changes in trabecular bone morphometry were observed following treatment withdrawal (Table 1 and Figs. 2 and 3). In PTH, a significant reduction in Tb.BV/TV (31%) and Tb.Th (17%), with a corresponding increase in Tb.Sp (8%), was observed between weeks 22 and 24, the Tb.BV/TV reductions being similar to the change in untreated OVX mice. With combined treatment, significant thinning of the trabecular bone was observed, corresponding with a significant reduction in Tb.BV/TV in ML + PTHalt. In ML, changes in trabecular morphometry were not significant following treatment withdrawal.
[Table 1 notes: treatment commenced, as per Fig. 1(A), at the beginning of week 18 and was withdrawn at the end of week 22. Tb.Sp was significantly higher in PTH than in ML (p=0.013) and ML + PTHalt (p=0.036) at the onset of treatment; the remaining morphometric parameters did not significantly differ among the four treatment groups at week 18 following randomisation. 3D morphometry of untreated ovariectomized mice (group "OVX") and intact controls ("CTRL") from Roberts et al. 36 are reported for comparison of trends in bone adaptation.]
No significant change in cortical bone morphometry (Table 2) was evident between weeks 22 and 24 in PTH, nor ML. In ML + PTH and ML + PTHalt, a significant increase in Tt.Ar persisted following treatment withdrawal. With PTH and/or loading, persistent increases in the cortical midshaft moments of inertia (Imax, Imin and J) and eccentricity were observed (10-49% increase at week 22 compared with week 18 values, p < 0.05), which were retained two weeks following treatment withdrawal (Supplementary Table S1). In untreated mice, by contrast, no change in these bone properties was observed with ovariectomy over time. At week 22, Imax and J were significantly higher in ML + PTH than PTH alone (Fig. S4).
[Table 2 notes: treatment commenced, as per Fig. 1(A), at the beginning of week 18 and was withdrawn at the end of week 22. Morphometric parameters did not significantly differ among the four groups at the onset of treatment (p > 0.05). 3D morphometry of untreated ovariectomized mice (group "OVX") and intact controls ("CTRL") from Roberts et al. 36 are reported for comparison of trends in bone adaptation.]
Effects of treatment on the bone densitometric properties. At week 20, BMC significantly increased between 7% and 26% along the tibia length in all treatment groups (Fig. 5). With PTH, a significant and persistent increase in BMC, from weeks 18-20-22, was observed in the most proximal (C01, 25% increase from week 18 to 22), the mid (C03, +17%) and the distal tibia (C06-C10, +23 to +25%). In ML and co-treated mice, persistent increments in BMC (from weeks 18 to 20 to 22) were observed in the proximal to mid-tibia (C01-C08, +17 to +45%). BMC remained above week 18 values following treatment withdrawal. Sub-regionally, the greatest osteogenic benefits of loading and co-treatment were observed posteriorly and laterally, particularly in the most proximal (C01, up to +71%, Fig. 6) and mid-tibia (C03-C06, up to +63%), whereas in PTH a more homogeneous response among quadrants was observed. Every treatment slightly increased TMD (up to 7% in the most proximal region of the PTH-treated mice at week 22 of age) in most regions of the tibia (Fig. 5). While PTH increased TMD homogeneously across the tibia and among all quadrants, in all loaded mice the central portion of the tibia was less affected, particularly in the anterior and medial regions (Fig. S6). In most cases, small effects on TMD were maintained after treatment withdrawal. Treatment effects differed among groups (Fig. 7). In ML, the increase in BMC was significantly higher proximally (C01; 10% difference) than in PTH at week 22. With PTH, the increase in BMC was significantly higher in the distal tibia (C08-C10; 5-13% difference) than in ML at weeks 20 and 22, with differences persisting following treatment withdrawal (week 24; C08-C10; 9 to 14%). By subregional analysis, greater osteogenic benefits of ML than PTH were observed in the mid-tibia postero-laterally, whereas PTH had greater benefit to the medial and more distal portions of the bone (Fig. S5, Supplementary Materials).
For TMD, the differences were small (less than 5%) and significantly differed only between PTH and the combined treatment groups in the most proximal tibia region (C01; 4% lower in the combined treatments compared to PTH, Fig. 8).
Discussion
In this study we quantified for the first time the longitudinal effects of PTH(1-34) and mechanical loading on bone morphometric and densitometric properties in an ovariectomised mouse model of osteoporosis. The results herein suggest a dominant effect of mechanical loading compared to injections of PTH, with increased and regionally-dependent benefits of combined treatments to the tibia cortical bone, but limited benefits for the trabecular bone.
PTH monotherapy had no significant anabolic benefit to metaphyseal trabecular bone, consistent with neutral effects in intact mice 29,31,49, but in contrast to the increases in bone mass shown in OVX rodents elsewhere 17,19,20. PTH inhibited further OVX-induced bone loss by trabecular thickening, a characteristic adaptive response 17,19,20, in the presence of declining trabecular number. Interestingly, in 50% of the mice examined the trabecular changes were characterised by a small increase in Tb.BV/TV to week 20 and then bone loss thereafter (Supplementary Materials, Fig. S1), supporting a transient response to ongoing treatment described cross-sectionally 17. Anabolic effects of PTH depend on its ability to stimulate osteoblast, osteocyte and osteoclast activities 50. While typically favourable to osteoblastic activity, benefits may be compromised in mice with very low (<5%) baseline Tb.BV/TV, as in the C57BL/6 mice herein and reported elsewhere 17. This is in line with the limited efficacy of PTH in the less trabecular-rich femoral neck, relative to benefits in the lumbar vertebra shown clinically 51. Strong positive interrelationships between baseline Tb.BV/TV and bony adaptations to PTH have been reported in OVX rats 19, though such relationships were not confirmed by our current data (Tb.BV/TV at week 18 vs. ΔTb.BV/TV over weeks 18-22; Spearman's ρ=−0.657, p=0.156, see Supplementary Materials). In the midshaft, PTH treatment led to an immediate increase (from weeks 18 to 20) in cortical thickness and bone area, consistent with cross-sectional findings in OVX mice 17,20, although the osteogenic benefits, except in Tt.Ar, did not persist thereafter. This finding in Ct.Th is contrary to the constant linear increase observed in OVX rats 19, and in 19-months-old intact mice, where PTH exacerbated age-related thinning of the cortical bone over time 29. PTH had relatively homogeneous benefits along the bone length, increasing BMC with constant benefits to the mid- to distal tibia, contrary to intact mice, where benefits propagated proximally to distally and in postero-medial sectors 31.
Tibia loading had anabolic benefits to both the secondary trabecular and cortical bone, as shown previously in intact and orchidectomised mice 23,24,27,29. In age-matched C57BL/6 mice, OVX- and age-related changes in trabecular metaphyseal bone are characterised by a significant decline in Tb.BV/TV (12-31% bone loss from 18- to 22-weeks-old) 36. In contrast, bone adaptations after loading were characterised by a persistent increase in Tb.BV/TV (89% increase from week 18 to 22) due to trabecular thickening, particularly in posterior regions (Fig. 3). In the mid-tibia, loading increased the cortical thickness and total cross-sectional area, and led to a constant increase in BMC in the proximal to mid-tibia, in agreement with cross-sectional 23,24,27,52 and recent longitudinal findings 53 in intact mice. Notably, the largest loading-induced increase occurred postero-laterally at the mid-shaft (Figs. 4 and 6), consistent with the higher bone formation and decreased resorption at the periosteal surface of these sites documented elsewhere, where the compressive strains are greatest under uniaxial load 54. Compared with PTH, loading showed greater benefits to trabecular, but not cortical, bone morphometry, except for a higher cortical thickness two weeks after treatment withdrawal. PTH had greater benefit to distal BMC, whereas loading was more beneficial to the proximal and mid-tibia, consistent with the heterogeneous strain distribution in this loading model 55.
Both concurrent and alternating co-treatment had anabolic benefits to morphometric and densitometric properties of the mouse tibia. In general, owing to the dominant loading effects discussed above, combined treatment induced longitudinal adaptations in morphometric properties similar to loading alone, e.g. increased Tb.BV/TV with trabecular and cortical thickening. Interestingly, with co-treatment, PTH appeared to limit the osteogenic benefits of loading on the trabecular bone, confirming a possible antagonistic interaction on metaphyseal trabeculae observed cross-sectionally in intact 19-months-old C57BL/6 mice 29, though contrary to the additive benefits on both appendicular and axial trabecular bone observed in 3-4 months-old intact 27 and OVX C57BL/6 mice 15. Combined treatments had greater benefits to cortical thickness than PTH alone, but not than loading monotherapy, which is consistent with short (2 weeks), but not with prolonged (3-6 weeks), tibia loading previously shown 27,29. In BMC, PTH(1-34) enhanced loading effects in the proximal tibia, particularly in postero-lateral regions that are subjected to higher compressive strain under controlled mechanical load, while conferring osteogenic benefits to the distal portion where mechanical effects were low. Compared with PTH, loading had increased benefits only to the proximal tibia, as confirmed by site-specific analysis of cortical morphometry elsewhere 29.
Meanwhile, alternating PTH led to a lower anabolic response at 22 weeks of age in the most proximal part of the tibia (significantly lower Tb.BV/TV, and lower BMC in C01-C04, compared with ML + PTH).
An appropriate animal model of the human disease is recommended for preclinical testing of novel anti-osteoporotic treatment strategies. In C57BL/6 mice, OVX-induced changes are characterized by rapid and persistent bone loss with concomitant reductions in circulating oestrogen 36, the latter of which is not typical in aging rodents, though both are cardinal features of human OP 56. We selected skeletally mature, yet relatively young, mice to quantify the adaptive response in the absence of aging and related comorbidities that could confound the findings, though aging can affect bone mechano-adaptation and responsiveness to co-therapy 29 and thus should be considered in future studies. With ovariectomy, our data highlight generally positive effects of combined bone anabolics on the cortical bone, but potentially antagonistic effects on trabecular bone in this mouse model. This variable response along the tibia, often contrary to outcomes elsewhere, highlights the need for caution when extrapolating findings from young or old intact animals or otherwise positive anabolic benefits reported in axial bones 15. Using comprehensive subregional assessment with a longitudinal study design, our results also provide meaningful additional information on bone's dynamic response to treatments that can be underrepresented by standard morphometric analyses. For example, characterization of mid-shaft cortical morphometry failed to capture the increased and highly region-dependent benefits of combined PTH and loading that we demonstrate by BMC partitioning in quadrants along the bone length. This spatial analysis, applied to high-resolution in vivo microCT images, represents an important methodological refinement, contributing to a substantial reduction in the number of animals used for preclinical assessment of novel anti-osteoporotic treatment strategies 32. Further, the longitudinal data could provide invaluable information for mechanistic models of bone remodeling with anabolic therapies, e.g. references 57-59.
There were limitations to this study. First, PTH(1-34) was administered approximately 2-3 hours following loading. While clinically the timing of the PTH(1-34) dose, e.g. morning rather than evening administration, can enhance its efficacy 51, the timing that optimises treatment synergies is yet to be resolved. Regardless, increased benefits were still shown, and post-loading administration may be clinically relevant given drug side-effects, e.g. cramping and nausea/dizziness, which may be contraindications to exercise 8. Second, the applied (12 N) load was matched across time-points and intervention groups. Because PTH, given from week 18, one week before the first application of mechanical load, affects cortical morphology, potential differences in local strains among the treatment groups may occur. Nevertheless, injections of PTH have been shown not to significantly affect the cortical bone and to induce only small differences (7-9%) in BMC change in the proximal medial and posterior sectors 31. Thus, the difference in local strain under the same axial load for the different groups of mice in week 19 should be minimal.
Third, the in vivo study design precludes microCT scanning at smaller voxel size without increasing the radiation dose. Thus, we could not reasonably evaluate the effects of treatment on intra-cortical remodeling, given that the mean cortical pore diameter in C57BL/6 mice is often less than the voxel size (i.e. <10 µm) 60. Fourth, although sufficiently powered per our a priori sample size estimation, the heterogeneous response of the mice (see Supplementary Materials) may confound group trends and limit our ability to detect further significant intervention effects. However, the longitudinal design is advantageous to reduce the risk of study bias while improving statistical power 32. Finally, C57BL/6 mice, particularly following OVX, have very low trabecular bone mass at treatment onset. Thus, in the tibia metaphysis there are often very few trabeculae on which to reliably assess treatment efficacy.
[Figure caption: Effects of PTH(1-34) and/or mechanical loading on tissue mineral density in 10 sections along the tibia length in ovariectomized C57BL/6 mice. Sections are C01, most proximal, to C10, most distal. Values are reported as the relative percentage difference between two treatment groups (g1 vs. g2), normalised for week 18 values of the latter group (g2). *p < 0.05 indicates statistically significant differences between groups (ANCOVA, adjusted for baseline values at week 18). Positive and negative values indicate greater increases in g1 and g2, respectively.]
In conclusion, combining PTH(1-34) and tibia loading has increased, albeit highly regionally-dependent, benefits to the tibia cortical bone properties in ovariectomized mice, whereas co-treatment had lower osteogenic benefits on the trabecular bone than loading alone. While PTH(1-34) has relatively homogeneous benefits along the tibia length, loading increased BMC more focally in the mid-diaphysis and postero-laterally, which are subjected to higher stresses and strains under compressive loads. These data reinforce the need for comprehensive spatial analysis along the bone length when testing the effects of novel treatment strategies.
Henoch-Schonlein Purpura in Children Hospitalized at a Tertiary Hospital during 2004-2015 in Korea: Epidemiology and Clinical Management

Purpose To investigate the epidemiology, clinical manifestations, investigations and management, and prognosis of patients with Henoch-Schonlein purpura (HSP). Methods We performed a retrospective review of 212 HSP patients under the age of 18 years who were admitted to Inje University Sanggye Paik Hospital between 2004 and 2015. Results The mean age of the HSP patients was 6.93 years, and the ratio of boys to girls was 1.23:1. HSP occurred most frequently in the winter (33.0%) and least frequently in the summer (11.3%). Palpable purpura spots were found in 208 patients (98.1%), and gastrointestinal (GI) and joint symptoms were observed in 159 (75.0%) and 148 (69.8%) patients, respectively. There were 57 patients (26.9%) with renal involvement and 10 patients (4.7%) with nephrotic syndrome. The incidence of renal involvement and nephrotic syndrome was significantly higher in patients with severe GI symptoms and in those over 7 years old. The majority of patients (88.7%) were treated with steroids. There was no significant difference in the incidence of renal involvement or nephrotic syndrome among patients receiving different doses of steroids. Conclusion In this study, the epidemiologic features of HSP in children were similar to those described in previous studies, but GI and joint symptoms manifested more frequently. It is essential to carefully monitor renal involvement and progression to chronic renal disease in patients ≥7 years old and in patients affected by severe GI symptoms. It can be assumed that there is no direct association between early doses of steroids and prognosis.

INTRODUCTION
Henoch-Schonlein purpura (HSP) is the most common type of vasculitis in children [1,2]. Patients with mild clinical symptoms respond well to conservative treatment, but steroids are often required to improve severe abdominal pain and joint symptoms [3-5]. Renal involvement is a well-known predictor of poor prognosis, but the rate of progression to chronic renal disease is lower in children than in adults [6-9]. Hospitalization is often reserved for cases complicated by severe abdominal or joint pain, and for patients with proteinuria due to renal involvement. Most cases of HSP can be confidently diagnosed in the presence of the characteristic purpuric spots, but in cases that manifest as severe abdominal pain in the absence of purpura, obtaining a diagnosis can be difficult [10,11]. The use of high-dose steroids may be considered in HSP patients presenting with severe gastrointestinal (GI) symptoms (i.e., abdominal pain and hematochezia) or persistent proteinuria. If severe GI symptoms or nephrotic syndrome persist despite steroid treatment, then other treatment modalities such as intravenous immunoglobulin (IVIG), immunosuppressive drugs, or plasmapheresis may be used [12-17]. In this study, we investigated the epidemiologic characteristics and clinical courses of children hospitalized with HSP at a tertiary hospital between 2004 and 2015.
MATERIALS AND METHODS
We performed a retrospective review of the medical records of 240 children hospitalized with HSP (diagnosis code: D69.0) at Inje University Sanggye Paik Hospital between January 2004 and February 2015.
Three children who were diagnosed at other hospitals before being transferred to Inje University Sanggye Paik Hospital were also included. A total of 212 patients were included in this study after the exclusion of 54 patients: 14 in whom more than 1 month had passed since initial diagnosis at another hospital, 13 with incomplete medical records, and 27 with incorrect coding. The study protocol was approved by the Institutional Review Board of the Inje University Sanggye Paik Hospital (SP IRB no. 16-02-005). Diagnosis of HSP was based on the American College of Rheumatology classification criteria [18]. Recurrence of HSP was defined as the recurrence of symptoms more than 1 month after remission. The data for sex, age, year and month of onset, presence of symptoms (skin, joint, abdominal, renal, and other organs), recurrence of symptoms, results of renal biopsy, dosage and duration of steroid treatment, duration of steroid tapering, IVIG treatment, results of endoscopic or radiological exams (abdominal ultrasonography (USG), small bowel series (SBS), and computed tomography (CT)), and blood tests (white blood cell (WBC) count, platelet count, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP)) were collected by review of medical records. The visual analog scale generally used to evaluate abdominal pain is not applicable for young children with HSP. In this study, abdominal pain was classified into three groups as follows: 1) severe abdominal pain group: requiring radiological examination to exclude intussusception or appendicitis, confirmation of intussusception, complications requiring surgical or endoscopic intervention, gross hematuria or hematochezia; 2) mild abdominal pain group: showing GI symptoms not meeting the criteria of the severe group; and 3) no abdominal pain group. Analysis of steroid treatment was performed in 113 patients who were followed up over a one-month period, after the exclusion of 14 patients without steroid treatment and 8 patients with extreme steroid dosages (<0.5 mg/kg/day or >2.5 mg/kg/day). These 113 patients were divided into two groups based on their initial prednisolone dosage: 1) low-dose group (0.5 mg/kg/day-1.5 mg/kg/day), n=94, and 2) high-dose group (1.5 mg/kg/day-2.5 mg/kg/day), n=19. Statistical analysis was performed with SAS ver. 5.1 (SAS Institute, Cary, NC, USA) by chi-square test, Mann-Whitney test, one-way analysis of variance, and logistic regression analysis.
RESULTS
Demographic data
A total of 212 patients were included in the study. HSP was more prevalent in males (male:female=1.23:1) and the average age of onset was 6.93 years. HSP admissions were most common in the winter (33.0%) and least common in the summer (11.3%). The incidence of HSP hospitalization was similar in spring (28.3%) and autumn (27.4%).
Clinical manifestation
A purpuric rash was observed in 208 patients (98.1%) and joint symptoms were present in 148 patients (69.8%) (Table 1). GI symptoms were present in 159 patients (75.0%), hematemesis in 5 (2.4%), hematochezia in 18 (8.5%), and intussusception in 2 (0.9%). No patients developed complications requiring surgical intervention. Renal involvement was present in 57 patients (26.9%), isolated hematuria in 21 (9.9%), isolated proteinuria not in the nephrotic range in 5 (2.4%), and co-existence of hematuria and proteinuria in 22 (10.4%). Ten patients (4.7%) experienced proteinuria within the nephrotic range as well as hematuria.
HSP nephritis (HSPN) was diagnosed by renal biopsy in 8 of the 10 HSP patients with nephrotic syndrome (renal biopsy was not performed in 2 of these patients, as one was transferred to another hospital and the other demonstrated spontaneous recovery of proteinuria and hematuria before renal biopsy). Scalp edema was observed in 9 patients (4.2%) and scrotal edema in 10 patients (4.7%).
The difference of clinical manifestation according to age
To investigate differences in clinical manifestations, treatment outcomes and prognosis of renal involvement according to age, patients were stratified based on whether they were older or younger than 7 years old.
Renal involvement and nephrotic syndrome were more common in patients with severe GI symptoms than in those without GI symptoms, but joint symptoms were less frequently observed in patients with GI symptoms (Table 3). IVIG was only given to patients with severe GI symptoms. The rate of steroid treatment was 95.7% in the severe GI symptoms group, 83.1% in the mild GI symptoms group, and 83.0% in the no GI symptoms group. Scrotal involvement was observed only in patients showing GI symptoms, irrespective of symptom grade. There were no significant differences in WBC count, platelet count, and CRP level among the groups. ESR was lower in the group with severe GI symptoms.
Comparison based on renal involvement
The average age of the renal involvement group was higher than that of the renal non-involvement group (8.44 vs. 6.37 years; p-value<0.001) (Table 4). GI symptoms were more frequently observed in patients with renal involvement than in those without (87.7% vs. 70.3%; OR, 3.01; p-value=0.009). Other factors such as joint symptoms, purpura, sex, recurrence, and laboratory tests showed no significant association with renal involvement.
Radiological examination
Abdominal USG was performed in 86 patients, SBS in 13 patients, and abdominal CT in 14 patients. Intussusception was diagnosed by USG in 2 patients (Table 1). Most patients had unremarkable findings, although occasionally focal bowel wall thickening was observed. In 9 of the 13 HSP patients, segmental wall thickening of the duodenum or small bowel was noted on SBS. In two patients with abdominal pain not associated with skin rash, steroid treatment was started on the impression of HSP after SBS. Abdominal CT showed focal bowel wall thickening in 6 patients.
Endoscopy
Twenty-two patients underwent gastrofiberscopy and colonoscopy was performed in 3 patients. Twelve patients were found to have erosive gastritis, hemorrhagic gastritis or duodenal ulcers at endoscopy. Endoscopy was performed after the diagnosis of HSP in most patients, with the exception of two patients who experienced abdominal pain in the absence of a rash. Of the three children examined via colonoscopy, one was diagnosed with inflammatory bowel disease associated with HSP, and the others showed no specific findings.
Treatment and outcome
The majority (88.7%, n=188) of HSP children were treated with steroids. IVIG was used in 8.5% of cases (n=18) and one child received immunosuppressive therapy.
Steroid treated group
There was no statistically significant difference in average age, period of initial steroid treatment, presence of joint or GI symptoms, degree of renal involvement, or rate of IVIG treatment between the high- and low-dose groups (Table 5).
The period of steroid treatment at full dose was 12 days in the high-dose group (steroid dose: 1.5-2.5 mg/kg/day), after excluding 5 patients with nephrotic syndrome requiring long-term treatment, and 9.03 days in the low-dose group (steroid dose: 0.5-1.5 mg/kg/day); this difference was not statistically significant. The period of steroid treatment with tapering dose was significantly shorter in the high-dose group compared to the low-dose group (8.21 vs. 11.3 days, respectively; p=0.025). However, the period of total steroid treatment showed no difference between the high-dose group and the low-dose group (20.37 vs. 20.21 days, respectively).
IVIG treated group
IVIG was administered to 18 patients who suffered from persistent abdominal pain despite steroid treatment (total dose=2 g/kg over 1 to 5 days). Complete remission of symptoms was observed in 6 patients within 24 hours after the end of the IVIG infusion. However, variable types of responses were noted after IVIG treatment, including partial improvement (n=5), temporary improvement of symptoms with relapse 2-3 days later (n=5), and no improvement (n=2). Five of the seven patients who experienced temporary or no improvement were transferred to other hospitals at their parents' request. In one patient, GI symptoms and purpuric rash persisted for one month despite treatment with steroids and cyclophosphamide, but completely improved after IVIG treatment. It was difficult to evaluate the long-term prognosis of the IVIG-treated group due to the high rate of transfer to other hospitals. Of the 18 patients treated with IVIG, only one patient developed nephrotic syndrome and was diagnosed with HSPN by renal biopsy. This patient recovered during the follow-up period.
Clinical course and prognosis
After the exclusion of 8 transferred patients and 10 lost to follow-up, 194 patients were followed up for an average of 261.2 days. Follow-up over one month was possible in 135 patients. Four patients with renal involvement were excluded because they did not visit the outpatient clinic after discharge. The average follow-up period of the remaining 53 patients with renal involvement was 490.5 days, and 46 patients were followed up for over 1 month. Of the 10 patients with renal involvement showing nephrotic-range proteinuria, 9 cases manifested within one month after diagnosis and one case seven weeks after diagnosis. Persistent elevation of creatinine after discharge was observed in 2 patients. One patient showed aggravation of renal function (Cr 1.12, estimated glomerular filtration rate (eGFR) 85.61 mL/min/1.73 m²) 6 years after onset and was transferred to another hospital. In another patient, renal function (Cr 1.48, eGFR 69.55 mL/min/1.73 m²) was aggravated at 1 month after onset, but improved (Cr 1.01, eGFR >90 mL/min/1.73 m², mild proteinuria) 2 years later. No patient progressed to end-stage renal disease (ESRD) during the study period.
DISCUSSION
HSP is the most common type of vasculitis in children. It is a form of leukocytoclastic vasculitis and is characterized by IgA-mediated injury. It has an incidence of 8-22/100,000 and is more prevalent in males [1,2,19,20]. Although HSP can manifest in all age groups, including adults, it is most frequently diagnosed at 5-7 years of age [4,19-23]. In this study, which included children under 18 years of age, the average age at first diagnosis was 6.93 years, and this finding is consistent with previous studies.
The frequency of diagnosis of HSP in this study was slightly decreased during 2008-2012 and subsequently increased after 2013, but this was not statistically significant. The peak incidence of HSP was in the winter (33.0%) and the lowest in the summer (11.8%), which is in keeping with findings from previous studies [20,23,24]. Rostoker [2] reported that the incidence of skin rash, joint and GI symptoms and renal involvement in one group of patients was 100%, 70%, 66%, and 37%, respectively. In Northern Spain [6], these values were 100%, 63.1%, 64.5%, and 41.2%, respectively, and 100%, 45.3%, 34.4%, and 44.7%, respectively, in Turkey [25]. In this study, the incidence of skin rash, joint and GI symptoms and renal involvement was 98.6%, 69.8%, 75.0%, and 26.9%, respectively, reflecting a higher incidence of GI symptoms but a lower incidence of renal involvement compared to those of other countries. In other Korean studies, Choi and Lee [23] reported that skin rash, joint and GI symptoms and renal involvement were 100%, 40.8%, 53.8%, and 18.9%, respectively. In the study by Kang et al. [26], these values were 100%, 55.4%, 56.3%, and 30.4%, respectively, and 93%, 52.4%, 71.4%, and 10.8%, respectively, in the study by Hong and Yang [27]. In the present study, the incidence of GI and joint symptoms was relatively high compared to other Korean studies, which might be due to regional factors and the fact that our study population was recruited from hospitalized patients. The rate of renal involvement was relatively low in Korea compared to that of other countries. In a Japanese study, Nagamori et al. [28] reported that the renal involvement rate was 15%, which is similar to that of Korea. However, this number was quoted to be as high as 49% in another Japanese study and 31.9% in a recent Chinese study [24,29]. These results suggest that the rate of renal involvement results from an association of multiple factors, including ethnicity, environment, and access to medical services. In the present study, joint symptoms were more frequently observed in younger children. This finding is consistent with the results of a recent Korean study and other studies comparing symptoms of HSP children with those of adults [6,7,9]. In HSP children without abdominal pain, joint pain was a major cause of hospitalization. In this study, clinical manifestations were assessed by dividing patients into two groups based on a reference age of 7 years. In previous studies that compared the clinical manifestations of HSP children with those of HSP adults, the rate of renal involvement was significantly higher in adults [6,7,9,26]. In children, renal involvement and association with nephrotic syndrome were frequently observed in those over 10 years of age [8,9]. Zhao et al. [24] proposed that age greater than 7 is a risk factor for renal involvement by multiple regression analysis. Rostoker [2] suggested that age greater than 5 is a risk factor for renal involvement, while other groups proposed that severe abdominal pain and age over 4 years are risk factors for renal involvement [29]. A recent Korean study stated that age of onset is a poor prognostic factor in HSPN patients [30]. It should be remembered that age of onset of HSP is the most important factor in evaluating prognosis, and thus it is essential to assess whether or not renal involvement is present in older children.
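Per-year odds ratios of the kind reported in the next paragraph come from a multivariable logistic regression; a minimal sketch (Python/statsmodels with hypothetical file and column names — the analysis in this study was run in SAS) is:

```python
# Minimal logistic-regression sketch for renal involvement vs. age and
# severe GI symptoms. Hypothetical file/column names; the study used SAS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsp_patients.csv")  # columns: renal (0/1), age_years,
                                      # severe_gi (0/1)
model = smf.logit("renal ~ age_years + severe_gi", data=df).fit()

# Exponentiated coefficients are odds ratios; exp(beta_age) is the
# multiplicative change in the odds of renal involvement per 1-year
# increase in age, with a CI from the exponentiated coefficient bounds.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```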
In the present study, we found that only GI symptoms (p-value=0.033; OR, 2.701) and age (p-value<0.001) were significant factors associated with renal involvement, the latter of which increased the OR for renal involvement by 1.195 (1.083-1.318) for every 1-year increase in age. In this study, 7 years old (average age of total HSP patients: 6.93 years) was regarded as the reference age for grouping. We confirmed that the rate of renal involvement was significantly higher in older children, and occurred at the same rate in children aged between 6 years and 14 years. These findings suggest that the likelihood of renal involvement increases with age in children with HSP. It was initially thought that renal involvement was not usually present at the time of diagnosis. However, several researchers have recently reported that renal involvement is evident in 75-90% of children with HSP within 1 month of diagnosis [1,2,20,31]. These findings were similar to our own, which showed that 90% of cases of renal involvement were manifest within 1 month of HSP onset. In this study, there was an association between GI symptoms and scrotal involvement, but no significant association was observed between scrotal involvement and renal involvement. Wang et al. [32] reported an association between scrotal involvement, GI symptoms, raised D-dimer, and HSPN. Another recent study in Korea also showed that scrotal involvement was found only in patients with abdominal symptoms [33]. In this study, blood results on admission did not differ among groups. In patients with severe GI symptoms, the ESR was low, but this finding could not be readily explained. In a previous study, Nagamori et al. [28] suggested that a scoring system comprising 6 laboratory tests (WBC, neutrophil count, albumin, D-dimer, coagulation factor XIII, and sodium) could be used for prognostic purposes. Recently, Hong and Yang [27] suggested a possible association between the grade of acute GI involvement, D-dimer, and fibrin degradation product. Wang et al. [32] reported that the occurrence of HSPN might be increased in patients with joint symptoms who showed an increased D-dimer. More studies are needed to investigate the use of laboratory values as markers of HSP severity and for prognostic purposes. Characteristic purpura spots are observed in most patients with HSP, but in those cases where skin manifestations are delayed after GI symptoms or absent altogether, accurately diagnosing and thus treating HSP can be difficult [11,33]. In this study, five patients complained of abdominal pain in the absence of purpura at admission (4 patients: no purpura after discharge; 1 patient: purpura 27 days after symptom onset). Focal thickening of the bowel wall in the duodenum and small bowel was observed in two patients by SBS and in one patient by abdominal USG. In one patient, treatment for HSP was started after finding focal mucosal thickening and a hematoma in the duodenum by gastrofiberscopy. These results suggest that non-invasive abdominal USG may be useful as a first-line investigation to differentiate HSP from other GI diseases in patients with severe abdominal pain, and that SBS or gastrofiberscopy may be used selectively in patients who do not respond well to conservative treatment [11,21,34-36]. Steroid treatment should be considered in HSP patients with severe abdominal pain where possible differential diagnoses include intussusception or GI hemorrhage, and in cases of severe joint pain resulting in immobilization.
Masarweh et al. [22] proposed the following HSP admission criteria: 1) scrotal pain and tenderness, 2) moderate abdominal pain, 3) GI hemorrhage, 4) proteinuria, 5) mobilization difficulty, and 6) multiple (>2) involved joints. In this study, a total of 208 patients out of 212 were hospitalized due to severe GI and joint symptoms. Early steroid treatment could shorten the duration of extra-renal symptoms like severe abdominal pain [3-5] and lower the incidence of GI complications [21]. However, steroid treatment in HSP is not yet standardized, which explains the variable range of steroid dosages and treatment periods used in previous studies. Huber et al. [3] and Dudley et al. [37] reported that prednisolone treatment (2 mg/kg/day) for 1 week with subsequent dose reduction for 1 week did not show any significant difference in the incidence of renal and GI complications. In contrast, Ronkainen et al. [4] and Jauhola et al. [20] reported that prednisolone treatment (1 mg/kg/day) for 2 weeks with subsequent dose reduction for 2 weeks did improve renal and extra-renal symptoms in the short term compared to placebo, but this finding did not significantly affect the patients' subsequent clinical course. In a recent Korean study, dexamethasone was given to HSP patients with severe abdominal pain until symptoms improved, and this was followed by a 2-week course of prednisolone (1 mg/kg/day) [38]. No difference in the recurrence rate or the persistence of nephritis was observed between the dexamethasone-treated and untreated groups [38]. In another study, there was also no significant difference in treatment results after 4 weeks of follow-up between the hydrocortisone-treated and high-dose methylprednisolone-treated groups [39]. According to most previous studies, steroid treatment does not affect the long-term prognosis of HSP [3,20,37,40]. This finding is true even for different types and dosages of steroids [26,38]. In this study, two groups classified according to the initial dose of steroid were compared, and we observed no significant difference in the rate of recurrence of HSP, renal involvement, and progression to nephrotic syndrome. However, this study is limited by the non-standardization of steroid treatment criteria, steroid type, and steroid dosage, as the results were based on a retrospective study performed by multiple clinicians over a long period. In this study, no ESRD cases were observed. However, relative underestimation of the long-term prognosis in this study should be considered, because some patients were transferred to other tertiary hospitals due to persistent elevation of creatinine or refractory severe GI symptoms, and thus could not be included in assessing the prognosis. At present, there is no retrospective study that has investigated the long-term prognosis of HSP in Korea. There is an ongoing need for a prospective large-scale study to establish a standardized treatment protocol for HSP and accurately assess prognosis. In children with HSP, the incidence of renal involvement and nephrotic syndrome is higher in those presenting at an older age, and in those patients with severe abdominal pain or GI hemorrhage. These patients should be carefully monitored to detect progression to HSPN. In children with severe abdominal pain, hematemesis, or hematochezia, performing SBS or gastrofiberscopy could be helpful to differentiate cases of HSP without characteristic purpuric spots.
In the future, more prospective, large-scale studies are needed to establish a definitive treatment protocol based on long-term prognosis and clinical management.
Translation regulation in the spinal dorsal horn – A key mechanism for development of chronic pain

Highlights • Spinal sensitization shares molecular mechanisms with hippocampal LTP and memory. • Changes in mRNA translation are observed in many chronic pain conditions. • Targeting translational control mechanisms is a promising strategy to inhibit pain. • Targeting spinal reconsolidation can reverse established hypersensitivity.

Introduction
Peripheral injury causes acute pain, which is essential for an organism's survival by ensuring quick withdrawal from harmful or potentially harmful stimuli. Under most circumstances, pain resolves shortly after damaged tissue heals. However, in some cases, the pain does not subside and persists after full tissue recovery. This type of pain, called chronic pain, does not serve any protective function and is likely driven by pathological changes that can arise in different components of the pain pathway. Long-lasting sensitization of primary sensory neurons and spinal nociceptive circuits, and plastic changes in brain regions, have all been associated with enhanced transmission and sensation of pain. In this review, we will focus on the spinal cord dorsal horn, which integrates inputs from peripheral and descending pathways to generate an output that is transmitted up to the brain. First, we will briefly describe the mechanisms underlying the sensitization of spinal pain circuits, and then present evidence for the role of translational control in the regulation of these processes.
Mechanisms underlying sensitization of spinal nociceptive circuits
In chronic pain conditions, repeated or intense noxious stimuli lead to maladaptive plastic changes along the pain pathway, including a sensitization of spinal nociceptive circuits, a phenomenon known as central sensitization (Woolf, 2011). Central sensitization is considered to be a key mechanism underlying the development of persistent hypersensitivity states (Latremoliere and Woolf, 2009). Alterations in several cellular processes can contribute to central sensitization, including an enhanced postsynaptic response of spinal neurons to neurotransmitter release from primary afferents (Ikeda et al., 2003, 2004), reduced inhibitory tone as a result of decreased excitability of spinal inhibitory interneurons (Guo and Hu, 2014; Torsney and MacDermott, 2006), and inefficient GABAergic and glycinergic neurotransmission (Coull et al., 2003), as well as modulation of descending pathways (Ossipov et al., 2014). An imbalance of excitatory versus inhibitory activity in central sensitization leads to enhanced excitability of spinal nociceptive circuitry, which causes an amplification of the peripheral signal. Central sensitization results in a reduced pain threshold (allodynia), an increase in the perceptual response to noxious stimuli (hyperalgesia), and a recruitment of peripheral inputs from non-injured areas, causing an expansion of the receptive field (secondary hyperalgesia).
Translational control of neuronal plasticity
Long-lasting modulation of intrinsic excitability and synaptic functions relies on new gene expression. Gene expression can be modulated at different steps: transcription, mRNA translation, mRNA and protein stability, and post-translational modifications of proteins. Translational control allows for the modulation of the cellular proteome by regulating the efficiency by which mRNA is translated into proteins.
It provides neurons with a mechanism to quickly and locally respond to intracellular stimuli and extracellular cues by modifying their cellular or synaptic proteome.
Translational control mechanisms
mRNA translation can be divided into three stages: initiation, elongation and termination. Initiation is the rate-limiting step for translation and therefore is tightly regulated by several mechanisms (Sonenberg and Hinnebusch, 2009). At their 5′ end, all nuclear-transcribed eukaryotic mRNAs contain a 7-methylguanosine triphosphate (m⁷Gppp) structure, termed the "cap". This structure facilitates ribosome recruitment to the mRNA (Fig. 1). The 3′ end of the mRNA contains a poly(A) tail that protects mRNA from degradation, and binds poly(A)-binding protein (PABP). The mechanisms regulating translation initiation can be divided into two major categories: (1) regulation of the recruitment of the ribosome to the cap at the 5′ end of mRNA (via phosphorylation of translation initiation factors such as 4E-BPs, eIF4E and eIF2α), and (2) regulation of translation at the 3′ end of mRNA via control of the length of the poly(A) tail (e.g. by CPEB). Ribosome recruitment requires a group of translation initiation factors, termed eIF4 (eukaryotic initiation factor 4). A critical member of this group is eIF4F, which is a three-subunit complex (Edery et al., 1983; Grifo et al., 1983) composed of (1) eIF4A (an RNA helicase), (2) eIF4E, which specifically interacts with the cap structure (Sonenberg et al., 1979), and (3) eIF4G, a large scaffolding protein that binds to both eIF4E and eIF4A. eIF4G serves as a modular scaffold that assembles the protein machinery to direct the ribosome to the mRNA (Fig. 1). eIF4E generally exhibits the lowest level of expression of all eukaryotic initiation factors. It plays a central role in cap recognition and, due to its low levels of expression, it is considered the rate-limiting factor for translation and a major target for regulation. The assembly of eIF4F is promoted by the mechanistic target of rapamycin complex 1 (mTORC1), which phosphorylates and thereby inactivates translational repressors, the eIF4E-binding proteins (4E-BP1, 4E-BP2 and 4E-BP3). 4E-BPs repress the formation of the eIF4F complex by competing with eIF4G for a common binding site on eIF4E. Upon phosphorylation by mTORC1, 4E-BP binding to eIF4E is reduced, allowing eIF4F complex formation and initiation of translation. mTORC1 also phosphorylates its second major downstream effectors, the p70 S6 kinases (S6K1/2), which regulate translation initiation (via eIF4B), translation elongation (via eEF2K) and ribosome biogenesis (via ribosomal protein S6). eIF4E activity is also regulated via phosphorylation at serine 209 by MNK1/2 (mitogen-activated protein kinase (MAPK) interacting protein kinases 1/2) downstream of ERK (extracellular-signal-regulated kinase) (Fig. 1). This phosphorylation event is associated with increased rates of translation initiation (Scheper et al., 2002), although the exact underlying molecular mechanism remains unknown. A second major translational control mechanism is mediated by the translation initiation factor eIF2 (composed of three subunits) (Sonenberg and Hinnebusch, 2009), via phosphorylation of its α subunit (Fig. 1). Translation initiation requires the formation of a ternary complex composed of the initiator methionine tRNA (Met-tRNAi) and GTP-bound eIF2.
At the end of each round of ribosome recruitment, inactive GDP-bound eIF2 is recycled to active GTP-bound eIF2 by the guanine nucleotide exchange factor (GEF) eIF2B (Pavitt et al., 1998). Phosphorylation of eIF2α at serine 51 inhibits the activity of eIF2B, reducing ternary complex formation and thereby inhibiting protein synthesis. Paradoxically, eIF2α phosphorylation stimulates the translation of mRNAs containing upstream open reading frames (uORFs) in their 5′ UTRs, such as ATF4 and CHOP. eIF2α is phosphorylated in response to different cellular stress conditions via activation of eIF2α kinases (PERK, PKR, GCN2 and HRI) (Trinh and Klann, 2013). Phosphorylation of eIF2α is largely involved in the regulation of general translation, whereas eIF4E-dependent translational control regulates the translation of a distinct subset of mRNAs, many of which are involved in proliferation, growth and synaptic plasticity. Translation is also regulated via 3′ end-mediated mechanisms. Translation of mRNAs containing cytoplasmic polyadenylation elements (CPE) in their 3′ UTR is regulated by the cytoplasmic polyadenylation element-binding protein (CPEB) (Richter and Klann, 2009). CPEB binds CPE and stimulates elongation of the poly(A) tail by regulating the polyadenylation apparatus composed of the poly(A) polymerase Gld2, the deadenylase PARN, and the translational factor neuroguidin (Ngd) (Ivshina et al., 2014; Udagawa et al., 2012). Elongation of the mRNA poly(A) tail leads to stabilization of the mRNA and enhanced binding of the poly(A)-binding protein (PABP), which facilitates translation initiation by simultaneously binding to both the poly(A) tail and eIF4G, resulting in mRNA circularization (Gray et al., 2000; Kahvejian et al., 2001). This mechanism has been shown to regulate the translation of CamkIIα and Nr2a mRNAs (Huang et al., 2002; Wu et al., 1998).

Synaptic plasticity
Synaptic plasticity refers to the ability of a synapse to strengthen or weaken in response to experience or stimuli. The predominant cellular model for synaptic plasticity is long-term potentiation (LTP), which is thought to underlie learning and memory (Morris, 2003). Coactivation of pre- and post-synaptic compartments triggers calcium influx into neurons, stimulating several signaling pathways to promote transcription and translation of plasticity-related genes. The newly synthesized mRNAs are either translated in the cell body or transported to synapses where they are locally translated (Jung et al., 2014; Tom Dieck et al., 2014). The local protein synthesis model is consistent with the presence of translation machinery (ribosomes and translation factors) and mRNAs in, or close to, dendritic spines (Steward and Fass, 1983; Steward and Levy, 1982). Moreover, LTP-inducing stimulation causes ribosomes to move from dendritic shafts into spines with enlarged synapses (Ostroff et al., 2002). Protein synthesis in dendrites occurs in response to various forms of stimulation (Kang and Schuman, 1996; Scheetz et al., 2000) and is essential for long-term plasticity (Huber et al., 2000; Kang and Schuman, 1996). Accordingly, studies in the hippocampus, amygdala and cortex have demonstrated a key role of translational control in the protein synthesis-dependent late phase of long-term potentiation (L-LTP), long-term depression (LTD) and learning and memory (Costa-Mattioli et al., 2009).
Inhibition of translation with anisomycin or inhibitors of mTORC1 impairs L-LTP and long-term memory (LTM) (Cammalleri et al., 2003; Tang et al., 2002). Neuronal activity and behavioural training lead to a reduction in eIF2α phosphorylation, resulting in suppression of LTD and stimulation of L-LTP and long-term memory (Costa-Mattioli et al., 2005; Costa-Mattioli et al., 2007; Costa-Mattioli and Sonenberg, 2006; Di Prisco et al., 2014). Regulation of translation via CPEB and PABP has also been shown to control L-LTP and LTM (Alarcon et al., 2004; Khoutorsky et al., 2013; Richter, 2007; Udagawa et al., 2012). Most of the current knowledge on the role of translational control in neuroplasticity has been derived from experiments in the hippocampus; however, recent studies show that similar mechanisms regulate activity-dependent long-term modification of synaptic strength in other brain areas, including the cortex, amygdala, and spinal cord (Belelovsky et al., 2005; Buffington et al., 2014; Khoutorsky et al., 2015; Khoutorsky and Price, 2017; Melemedjian and Khoutorsky, 2015; Parsons et al., 2006).

Translational control in spinal plasticity
Studies of spinal LTP and central sensitization have demonstrated a significant overlap with the mechanisms underlying hippocampal LTP and memory formation (Ji et al., 2003; Price and Inyang, 2015). LTP of extracellular field potentials in the superficial dorsal horn of the spinal cord can be induced by electrical stimulation of afferent C fibers (Liu and Sandkuhler, 1995), noxious stimulation of peripheral tissue, and nerve damage (Sandkuhler and Liu, 1998; Zhang et al., 2004). Stimulation of the sciatic nerve with an LTP-inducing protocol produced long-lasting allodynia and thermal hyperalgesia (Ying et al., 2006; Zhang et al., 2005), suggesting that spinal LTP might be a cellular model of injury-induced hyperalgesia (Sandkuhler, 2007). A unique feature of spinal LTP is that it exhibits activity-dependent potentiation of both activated synapses, causing homosynaptic potentiation, and non-activated synapses, leading to heterosynaptic potentiation (Kronschlager et al., 2016; Latremoliere and Woolf, 2009). Heterosynaptic potentiation, which is not present in the cortex or hippocampus, is the major form of synaptic plasticity in the spinal cord. Heterosynaptic LTP is a key mechanism for the development of distinct forms of activity-dependent central sensitization, manifested by a response to low-threshold afferents (allodynia) and a spread of pain sensitivity to non-injured areas (secondary hyperalgesia) (Latremoliere and Woolf, 2009).

Fig. 1. Translational control mechanisms. Signaling pathways upstream of translation can be stimulated by activation of several membrane receptors. The activation of these receptors leads to subsequent stimulation of (A) the RAS/RAF/ERK pathway and the phosphorylation of eIF4E, and (B) the activation of the PI3K/AKT/mTORC1 pathway. mTORC1 phosphorylates and inhibits the translational repressor 4E-BP, resulting in increased eIF4F complex formation, which promotes the recruitment of the ribosome to the cap structure at the 5′ end of the mRNA. This mechanism controls translation of a specific subset of mRNAs. (C) Translation is also regulated via the eIF2α pathway, which controls both general translation and translation of mRNAs containing uORFs in their 5′ UTR (e.g. ATF4 and CHOP).
Inhibition of protein synthesis with either cycloheximide or anisomycin blocked the late phase of spinal LTP elicited by C-fiber stimulation but did not affect the induction (early) phase (Hu et al., 2003). Thus, similar to hippocampal LTP, spinal LTP exhibits two distinct phases: an early phase that is protein synthesis-independent, and a late phase that is protein synthesis-dependent (Bliss and Collingridge, 1993). Moreover, Eif4ebp1−/− mice lacking the translational repressor 4E-BP1 show a reduced threshold for the induction of spinal LTP as well as an increased extent of potentiation. These results indicate that spinal LTP exhibits bidirectional dependence on protein synthesis, and suggest that stimulation of mRNA translation in spinal neurons might facilitate the sensitization of spinal nociceptive circuitry and the accompanying hypersensitivity in chronic pain conditions.

Evidence for a central role of translational control in chronic pain conditions
Numerous studies have documented increased activity in signaling pathways upstream of mRNA translation in spinal neurons following acute noxious peripheral stimulation and also in chronic pain conditions. Intraplantar capsaicin, for example, increases mTORC1 signaling in the dorsal horn (Geranton et al., 2009; Xu et al., 2011). mTORC1 signaling also increases in the dorsal horn of the spinal cord in models of chronic pain, including chronic inflammation-induced pain caused by complete Freund's adjuvant (CFA) (Liang et al., 2013), bone cancer-induced pain (Shih et al., 2012) and nerve injury. Consistent with the activation of mTORC1, the signaling of upstream kinases such as PI3K and AKT is also upregulated in these conditions in the dorsal horn of the spinal cord (Pezet et al., 2008; Xu et al., 2011). The functional role of the stimulation of protein synthesis in spinal neurons following peripheral injury has been extensively studied using various pharmacological approaches. Subcutaneous injection of formalin elicits a biphasic pain response. The early-phase pain behaviour (0-10 min) is mediated by activation of nociceptors, whereas the second phase (10-50 min) is thought to result from sensitization of spinal pain circuits. Intrathecal administration of the protein synthesis inhibitor anisomycin, or the mTORC1 inhibitor rapamycin, profoundly reduces nocifensive behaviour in the second phase of the formalin test but not the first phase (Asante et al., 2009; Kim et al., 1998; Price et al., 2007; Xu et al., 2011). Consistent with the behavioural effects, formalin-induced hyperexcitability in wide-dynamic-range dorsal horn spinal neurons is inhibited by rapamycin (Asante et al., 2009). Additionally, intrathecal rapamycin alleviates capsaicin-induced secondary mechanical hyperalgesia, which is caused by sensitization of spinal cord neurons to input from capsaicin-insensitive Aδ nociceptors (Geranton et al., 2009). Inhibition of mTORC1 also efficiently alleviates hypersensitivity in chronic models of pain, including chronic inflammation-induced pain (Liang et al., 2013; Norsted Gregory et al., 2010), bone cancer-induced pain (Shih et al., 2012) and neuropathic pain (Asante et al., 2010; Cui et al., 2014; Zhang et al., 2013). Pharmacological evidence for the central role of protein synthesis and its master regulator mTORC1 in the spinal cord in the regulation of hypersensitivity is supported by genetic manipulations of different components of the mTORC1 pathway.
For example, mechanical hypersensitivity can be caused by activation of the mTORC1 pathway via spinal deletion of TSC2 (Xu et al., 2014), an upstream repressor of mTORC1, or by spinal ablation of 4E-BP1, a repressor of eIF4F complex formation and cap-dependent translation. Altogether, these studies indicate that mTORC1 activity and protein synthesis are upregulated in the dorsal horn of the spinal cord in multiple acute and chronic pain conditions, and that their inhibition efficiently alleviates nociceptive behaviour and pain hypersensitivity. Another important phenomenon in which translational control in the spinal cord plays a central role is "hyperalgesic priming" (Reichling and Levine, 2009). Peripheral tissue injury, causing a transient hypersensitivity, leads to persistent sensitization or "priming" of the nociceptive pathway to subsequent insults (Reichling and Levine, 2009). This form of plasticity persists for many weeks and models the clinical situation of increased risk of developing chronic pain in patients with recurrent tissue injuries. The induction of hyperalgesic priming is mediated via brain-derived neurotrophic factor (BDNF)-dependent activation of mTORC1 and eIF4F complex formation in the spinal cord, which stimulates the synthesis of PKCλ and PKMζ (Asiedu et al., 2011; Melemedjian et al., 2013). Interestingly, spinal LTP is enhanced in primed animals (Chen et al., 2018), supporting the role of synaptic plasticity in this process. Notably, PKCλ and PKMζ play key roles in the expression and maintenance of hippocampal LTP and memory storage, further demonstrating the similarity between the molecular mechanisms underlying persistent pain and memory.

Translational control in opioid-induced tolerance and hyperalgesia
Sensitization of spinal circuits can be caused not only by peripheral tissue damage and subsequent activation of C fibers, but also by aberrant spinal plasticity in response to drugs. Opioid-induced tolerance and hyperalgesia are two examples of such plasticity, commonly observed in both animal models and human patients (Sjogren et al., 1993). Opioid-induced hyperalgesia is caused by chronic opioid administration, which can paradoxically lead to central sensitization and pain (Kim et al., 2014; Lee et al., 2013). Although the etiology of opioid-induced hyperalgesia is poorly understood, there are several proposed mechanisms, including the activation of NMDA receptors and protein kinase C (PKC), upregulation of spinal dynorphins, and stimulation of descending facilitatory pathways (Lee et al., 2011). Opioid-induced tolerance occurs during long-term opioid treatment, requiring escalating doses of opioids to maintain a consistent level of analgesic effect (Chu et al., 2006). The mechanisms underlying opioid-induced tolerance involve opioid receptor desensitization and down-regulation (Allouche et al., 2014; Williams et al., 2013). Repeated intrathecal administration of morphine is sufficient to cause tolerance and hyperalgesia, suggesting that spinal cord plasticity plays a central role in these phenomena. Interestingly, a selective μ-opioid agonist, DAMGO, stimulates the AKT/mTORC1 axis and its downstream effectors 4E-BP1 and p70 S6 kinase in non-neuronal cell lines stably expressing the μ-opioid receptor (Polakiewicz et al., 1998).
This in vitro finding was confirmed in an in vivo mouse study showing that repeated intrathecal morphine injections strongly induce mTORC1 signaling and increase eIF4F complex formation and mRNA translation via activation of the μ-opioid receptor (Xu et al., 2014). Remarkably, inhibition of mTORC1 with rapamycin not only alleviated the development of morphine-induced tolerance and hyperalgesia, but also reversed fully established tolerance and hyperalgesia after 6 days of daily morphine administration. The mechanisms by which mTORC1 inhibition decreases opioid-induced tolerance and hyperalgesia remain unknown. It is tempting to speculate that morphine-induced maladaptive spinal plasticity requires mTORC1 and protein synthesis for its induction and maintenance. Consistent with this hypothesis, mTORC1 inhibition attenuated the upregulation of dorsal horn PKCγ, neuronal nitric oxide synthase (nNOS), and CaMKIIα, three key molecules involved in spinal plasticity as well as in morphine-induced tolerance and hyperalgesia (Xu et al., 2014). A recent study suggested that opioid-induced tolerance and hyperalgesia require the activity of μ-opioid receptors in nociceptors (Corder et al., 2017). Intrathecal injections are known to target both dorsal root ganglia (DRG) and the spinal cord. Since intrathecal administration of mTORC1 inhibitors can block mTORC1 activity in the DRG, an alternative approach should be used to inhibit mTORC1 selectively in the spinal cord but not in the DRG, for example by spinal intra-parenchymal viral injection to downregulate mTORC1.

New approaches to reverse established sensitization by targeting spinal reconsolidation
New gene expression is required for the induction phase of spinal sensitization, but not for its maintenance. As soon as sensitization is established, it is no longer sensitive to the inhibition of protein synthesis (Asiedu et al., 2011; Melemedjian et al., 2013). Likewise, memory formation is sensitive to protein synthesis inhibitors at the acquisition stage, but once memories are formed, they are consolidated into a stable and protein synthesis-independent trace. Consolidated memories can be retrieved by exposure to a conditioned stimulus, rendering them labile and dependent on protein synthesis and mTORC1 activity for reconsolidation (Lee et al., 2017; Nader et al., 2000). The fragile nature of the memory trace after retrieval provides an opportunity to erase it by pharmacological targeting of protein synthesis and mTORC1. The central role of mTORC1 in reconsolidation has been demonstrated for memories associated with electrical foot shock and addictive substances (Barak et al., 2013; Blundell et al., 2008; Stoica et al., 2011), raising the possibility that inhibition of mTORC1 is a potential approach to erase the memory of an adverse event, such as in post-traumatic stress disorder (PTSD). The phenomenon of reconsolidation has also been demonstrated in the spinal cord (Bonin and De Koninck, 2014; Bonin and De Koninck, 2015). Intraplantar capsaicin-induced sensitization was insensitive to protein synthesis inhibition when fully established, but became anisomycin-sensitive following reactivation of spinal pain pathways by a second capsaicin administration. Transformation of the established capsaicin-induced sensitization into a labile state requires the activation of second-order spinal neurons and the activity of CaMKIIα and ERK, two molecules involved in spinal synaptic plasticity.
The central role of synaptic plasticity in the reconsolidation phenomenon is further supported by LTP experiments (Bonin and De Koninck, 2014). Fully established spinal LTP could be reversed when a second tetanic stimulation was delivered in the presence of anisomycin. Altogether, these results show that established hyperalgesia and spinal LTP can be rendered labile by reactivation of pain circuits, further demonstrating an intimate link between persistent pain and the LTP of spinal nociceptive circuits (Ji et al., 2003). Spinal pain reconsolidation-like effects have also been demonstrated in a model of hyperalgesic priming. The activation of dopamine D1/D5 receptors coupled with anisomycin reversed persistent sensitization in primed animals (Kim et al., 2015). The results of pain reconsolidation studies provide a potential novel therapeutic avenue to abolish established sensitization of nociceptive circuits in the spinal cord in chronic pain states. Reactivating the pain memory trace and transforming it into a labile state might allow for the erasure of persistent pain by blocking reconsolidation with protein synthesis inhibitors. Notably, the re-opening of the reconsolidation window by spinal application of AMPA and NMDA could be used for a variety of chronic pain states where the sensitization-inducing stimulus is unknown or may no longer be relevant.

Conclusions
Maladaptive plasticity in the spinal cord is a key mechanism for the sensitization of pain circuits and the subsequent development of pain. The central role of translational control in the regulation of synaptic and intrinsic neuronal plasticity in the spinal cord provides an opportunity to target translational control mechanisms to reverse the sensitization state. Recently discovered approaches to open the reconsolidation window by peripheral or central reactivation of nociceptive circuits, or by activation of dopaminergic pathways, provide a promising therapeutic avenue. To fully understand the role and mechanisms of action of translational control in pathological pain states, it is essential to identify the subsets of differentially translated mRNAs in different cell types, pain states and phases of spinal cord sensitization. This information would provide an invaluable resource for better understanding the molecular mechanisms underlying the sensitization of spinal pain circuits and the chronification of pain.
2019-03-26T01:42:54.333Z
2018-04-03T00:00:00.000
{ "year": 2018, "sha1": "7129201c8550cc4a46bb3c216d60ed79aabe3981", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ynpai.2018.03.003", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a7a49dafeb948ac8b9d73d2a1f2f4a761a0ce12f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233805682
pes2o/s2orc
v3-fos-license
A new species of Amphisbaena (Squamata: Amphisbaenidae) from the Orinoquian region of Colombia

In northern South America, amphisbaenians are rarely seen among the herpetofauna. Thus, general knowledge about them is very poor. During a herpetological survey in 2012 at Casanare, Colombia, we found two specimens of an unusual Amphisbaena. A third specimen sharing the same morphotype, labeled Amphisbaena sp. and from the Vichada department, was found deposited in a Colombian reptile collection. Based on morphological analyses together with phylogenetic analyses of 1029 base pairs of mitochondrial DNA (mtDNA), we describe a new species of Amphisbaena that inhabits the Orinoquian region of Colombia. The new species is part of a phylogenetic clade together with A. mertensii and A. cunhai (central-southern Brazil), exhibiting large genetic distances (26.1–28.9%) between the newly identified lineage and those taxa, as well as the sympatric taxa A. alba and A. fuliginosa. Morphologically, this new Amphisbaena can be distinguished from its congeners by a combination of characters: the number of precloacal pores, the absence of a malar scale, the postgenial scales, and the body and caudal annuli counts. Amphisbaena gracilis is, on morphological grounds, the most similar species. However, the new species can be distinguished from it by its higher body annuli counts, the angulus oris aligned with the edges of the ocular scales and the center of the frontal scales, fewer large middorsal segments on the first and second body annuli, and a rostral scale visible from above. The description of this new Amphisbaena species points out the urgent need to increase the knowledge of worm lizards in Colombia.

Introduction
Amphisbaenians are one of the most enigmatic and unusual squamates. All species have burrowing habits, but some occasionally venture onto the surface or can be found under objects on the ground (Pough et al. 1998). Thus, due to their fossorial habits, cryptic behavior, secretive microhabitats and low encounter rates, amphisbaenians are considered an elusive research subject. About 102 species of the genus Amphisbaena Linnaeus, 1758 have been described in South America (Gans 2005; Uetz et al. 2020), with Brazil being the country with the highest diversity, with over 80 species (Gans 2005; Gomes and Maciel 2012; Teixeira et al. 2014; Uetz et al. 2020). Colombia is considered a megadiverse country, in part due to its rich fauna of around 621 species of reptiles (Uetz et al. 2020). However, worm lizards remain poorly represented in the Colombian herpetofauna due to the lack of scientific knowledge. Currently, Colombian worm lizards comprise two genera (Mesobaena Mertens, 1925 and Amphisbaena Linnaeus, 1758) and five species: Mesobaena huebneri Mertens, 1925; Amphisbaena alba, A. fuliginosa, A. medemi and A. spurrelli. Amphisbaena alba and A. fuliginosa (sensu Vanzolini 2002) are the most widely distributed amphisbaenids in the country; A. alba is restricted to the Cis-Andean region while A. fuliginosa is present in both Cis- and Trans-Andean regions, ranging from sea level to 1300 m a.s.l. Amphisbaena spurrelli was the first amphisbaenid described from Colombia. It is distributed across the Chocoan region to Panamá, and its type locality corresponds to the corregimiento of Andagoyá, municipality of San Juan, department of Chocó (Boulenger 1915; Gans and Mathers 1977).
Mesobaena huebneri, the second worm lizard species described, is known only from three disjunct and distant localities: its type locality in the department of Inírida (Amazonian basin, specific locality unknown); the Timbá community, municipality of Mitú, department of Vaupés; and Serranía de la Macarena, department of Meta [specific locality unknown (Gans 1971; Cole and Gans 1987)]. Finally, Amphisbaena medemi was erected by Gans and Mathers 33 years ago and is the most recently described worm lizard. This species is distributed across the Caribbean region of Colombia, having as type locality the old Inderena fishing facility at Ciénaga de Amajehuevo, municipality of San Cristobal, Atlántico. After the early efforts made by Gans and collaborators during the 20th century, few attempts have been made to carry out a comprehensive taxonomic assessment of the Amphisbaena species distributed in Colombia, or in northern South America more broadly (Señaris 1999; Costa et al. 2018a). The most recent studies in Colombia have only provided a checklist of the already known Amphisbaena species or distributional records obtained from fieldwork, ignoring the specimens housed in museums that await a detailed revision (Rangel-Ch et al. 2012; Angarita-Sierra et al. 2013; Aponte-Gutiérrez et al. 2019; Carvajal-Cogollo 2019). During a herpetological inventory in the department of Casanare, Colombia (Pedroza-Banda et al. 2014), we found two specimens of an unusual Amphisbaena from the municipalities of Paz de Ariporo and Orocué. A third specimen sharing the same morphotype seen in the Amphisbaena specimens from Casanare was found in the reptile collection of the Pontificia Universidad Javeriana, labeled as Amphisbaena sp., from the municipality of Puerto Carreño, department of Vichada. These three specimens shared unique similarities and did not match previous descriptions of any recognized species of the genus (Gonzalez-Sponga and Gans 1971; Gans and Mathers 1977; Gans 2005). Hence, it has become clear that these specimens represent an undescribed evolutionary lineage of amphisbaenians. Therefore, the goal of this paper is to recognize this new species and describe it by integrating molecular and morphological analyses.

Ethics statement
Fieldwork was performed under the scientific research permit for collection of wild specimens of biological diversity for non-commercial purposes issued by CORPORINOQUIA (Research Auto: 500.5712.0380) and the Colombian Ministry of Environment and Sustainable Development (MADS) by agreement 083 of 2012. This study was conducted following the Colombian animal welfare law and the acts on the collection of wild specimens of biological diversity (Ley 1774, 2016; Decreto 1376, 2013), as well as the Universal Declaration on Animal Welfare (UDAW) endorsed by Colombia in 2007.

Fieldwork and sampling
Fieldwork was carried out in August 2012 in the municipalities of Paz de Ariporo and Orocué, department of Casanare, Colombia. Searches for amphisbaenians were conducted by three researchers from 8:00 to 11:30 and from 14:00 to 17:00 for 15 days, with a sampling effort of 97.5 person-hours. We removed cover objects and leaf litter, digging up the ground to a depth of 5 to 15 cm for three to five minutes per event. In particular, we included piles of moriche palm (Mauritia flexuosa L.f., 1782) leaves among the microhabitats sampled.
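As a quick sanity check on the reported effort, the daily search window can be multiplied out. The minimal Python sketch below assumes the published 97.5 figure counts hours of searching rather than hours multiplied by the three observers, since only that reading reproduces the number; this is our reading, not a statement from the source.

```python
# Sanity check of the reported sampling effort.
# Assumption: 97.5 counts survey hours, not observer-hours.
morning = 11.5 - 8.0      # 8:00-11:30  -> 3.5 h
afternoon = 17.0 - 14.0   # 14:00-17:00 -> 3.0 h
days = 15

survey_hours = (morning + afternoon) * days
print(survey_hours)       # 97.5, matching the paper
print(survey_hours * 3)   # 292.5 if multiplied by the 3 researchers
```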
Individuals collected were immediately placed into cloth bags for the later general procedures of measurement and identification described by Pedroza-Banda et al. (2014).

Molecular data collection and laboratory procedures
The molecular distinctiveness and phylogenetic relationships of the new species of Amphisbaena were assessed by analyzing molecular data corresponding to 1029 bp of the NADH dehydrogenase subunit 2 (ND2) gene, mtDNA. We assembled a data set by aligning the sequences from the new species and Colombian individuals of A. alba and A. fuliginosa with homologous sequences from Antillean and South American amphisbaenian species published in GenBank (Table 1). The homologous ND2 sequence of the lizard species Anolis auratus (DQ377355) was used as outgroup. Total genomic DNA was extracted using a standard phenol-chloroform method (Sambrook et al. 1989). We amplified the gene fragment using the primer pairs NADHF/NADHR and L4349/H5540 (Measey and Tolley 2013). We carried out PCRs in a total volume of 30 μl containing one unit of Taq polymerase (Bioline; Randolph, MA), 1X buffer (Bioline), a final concentration of 1.5 mM MgCl2 (Bioline), 0.5 μM of each primer, 0.2 mM of each dNTP (Bioline), 0.2 µg of bovine serum albumin (BSA) and approximately 50 ng of total DNA. We purified the PCR products using the ammonium acetate protocol (Bensch et al. 2000) and sequenced them on an ABI 3130xl genetic analyzer (Applied Biosystems, Foster City, CA, USA) using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) at the Instituto de Genética, Universidad Nacional de Colombia. We stored the remaining DNA extractions at -80°C in the tissue collection of the Instituto de Genética (for voucher numbers see Table 1). Thermocycling conditions were as indicated by Measey and Tolley (2013). The GenBank accession numbers of the obtained sequences are MT433762, MT433763, MT433764, MT433765 and MT433766 (Table 1). The sequences were edited and aligned using Chromas 1.51 (http://www.technelysium.com.au/chromas.html) and BioEdit 7.0.5.2 (Hall 1999).

Phylogenetic analyses and genetic divergence
We analyzed the dataset under both unpartitioned and partitioned schemes (in the latter, each codon position of the protein-coding ND2 gene was treated as a distinct partition). We assessed the optimal partitioning scheme and best-fit evolutionary models using PartitionFinder v1.1.1 and the Bayesian Information Criterion (Lanfear et al. 2012), resulting in the selection of the partitioned scheme. For this scheme we applied the resulting models in a Bayesian analysis with MrBayes v3.2.1 (Ronquist et al. 2012): GTR+I+G for the 1st and 3rd codon positions of ND2, and TVM+G for the 2nd codon position. We incorporated these models into a single tree search using a mixed-model partitioning approach (Nylander et al. 2004). For this analysis, we carried out two parallel runs using four Markov chains, each starting from a random tree. We ran the Markov chains for 20 million generations. The burn-in was set to sample only the plateau of the most likely trees, which were used for generating a 50% majority-rule consensus. We then used the software TRACER 1.5.4 (Rambaut and Drummond 2007) to assess an acceptable level of MCMC chain mixing and to estimate effective sample sizes for all parameters. To assess the genetic differentiation between the new lineage and the other related Amphisbaena species (including the sympatric A. fuliginosa and A. alba), we calculated uncorrected p-distances for the ND2 gene fragment using MEGA 7.0.21 (Kumar et al. 2016).
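For readers unfamiliar with the metric, the uncorrected p-distance is simply the proportion of sites that differ between two aligned sequences, with gap positions ignored. The sketch below reproduces that definition in Python under the assumption of a pre-aligned pair of fragments; the toy sequences are hypothetical placeholders, not the study's ND2 data, and MEGA reports the same quantity.

```python
# Uncorrected p-distance: proportion of differing sites between two
# aligned sequences, skipping alignment gaps. Toy sequences below are
# hypothetical placeholders, not the study's ND2 data.

def p_distance(seq1: str, seq2: str) -> float:
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    compared = diffs = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":   # skip gap positions
            continue
        compared += 1
        diffs += a != b
    return diffs / compared

new_lineage = "ATGCTACCAGTAGCTCTA"   # hypothetical ND2 fragment
a_mertensii = "ATACTTCCGGTGGCACTT"   # hypothetical ND2 fragment
print(f"p-distance: {p_distance(new_lineage, a_mertensii):.3f}")
```

Because the metric applies no substitution-model correction for multiple hits, values as large as the 26.1–28.9% reported here are conservative lower bounds on the true divergence.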
Morphology
We compared the collected amphisbaenians and the individual found in the collection of the Pontificia Universidad Javeriana to other preserved specimens housed in the following Colombian biological collections: the reptile collection of the Instituto de Ciencias Naturales, Universidad Nacional de Colombia (ICN-R, Bogotá); Museo de Historia Natural, Universidad de Antioquia (MHUA, Medellín); Museo de la Universidad La Salle (MLS, Bogotá); Pontificia Universidad Javeriana (MUJ, Bogotá); Instituto de Investigación de Recursos Biológicos Alexander von Humboldt (IAvH-R, Villa de Leyva); and the reptile collection of the Universidad Industrial de Santander (UIS-R, Bucaramanga). We compared the pholidosis of the three specimens analyzed in this study to morphological data available in published references for the 50 nominal four-pored Amphisbaena species, as well as for the Amphisbaena species that inhabit the Orinoquian region (Table 2). The definitions and terminology used in the diagnosis, description and comparison sections follow, as far as possible, the broadly used descriptions of South American amphisbaenians according to Gans (1962, 1963, 1967); Gans and Mathers (1977); Vanzolini (1994, 2002); and Teixeira et al. (2014), as follows: number of precloacal pores (P); supralabial scales (SS); infralabial scales (IS); temporal scales (TS); number of segments of the first postgenial scale row (FPG); number of segments of the second postgenial scale row (SPG); malar scales (M); number of segments of the postmalar scale row (PM); body annuli (BA); caudal annuli (CA); number of dorsal segments per annulus at midbody (DS); number of ventral segments per annulus at midbody (VS); number of segments per annulus at the anterior edge of the cloaca (SAC); number of segments per annulus at the posterior edge of the cloaca (SPC); number of cloacal annuli (CCA) [cloacal annuli are those between the anterior and posterior edges of the cloaca]; autotomy sites on caudal annuli (AUC). Likewise, we followed the characters used by Gonzalez-Sponga and Gans (1971); in particular, we added to our analyses the angulus oris (i.e. the lateral limit of the oral fissure formed by the junction of the upper and lower lips), as well as the presence/absence or number of the large middorsal segments of the first and second body annuli.

[Table 2: pholidosis comparison of A. elbakyanae sp. nov. with the nominal four-pored Amphisbaena species; the abbreviated column headers correspond to species epithets.]

Amphisbaena elbakyanae sp. nov. (3) rostral scale short, subtriangular, ventrally expanded and posteriorly without contact with the prefrontal scales; (4) nasal scales in broad contact; (5) six premaxillary teeth; (6) ten maxillary teeth.

Diagnosis. Amphisbaena elbakyanae sp. nov. can be distinguished from all its congeners by the following combination of characters: (1) three supralabial scales; (2) three infralabial scales; (3) second supralabial scale longer than the first and third supralabial scales, contacting the first and third supralabials and the temporal, ocular and prefrontal scales; (4) angulus oris lying in a transverse plane passing through the posterior edges of the ocular scales and the center of the frontal scales; (5) second infralabial scale in contact with the postmental scales; (6) six premaxillary teeth; (7) ten maxillary teeth;
(8) first body annulus including one large segment on each side, lying immediately posterior to the inner parietal scales and abutting onto the posterolateral edge of the outer parietal scales (Fig. 3A-B) (versus first body annulus including two or three large segments on each side lying immediately posterior to the inner parietal scales, abutting onto the posterolateral edge of the outer parietal scales, in A. gracilis; Fig. 3A-B); middorsal segments of the second and third body annuli non-enlarged (versus three or four middorsal segments of the second and third body annuli enlarged in A. gracilis; Fig. 3A-B); and angulus oris lying in a transverse plane passing through the posterior edges of the ocular scales and the center of the frontal scales (Table 2).

Description of the holotype (Figs 2-4; Table 4). Male, small body size (SVL = 211 mm; TL = incomplete tail); slender body (BD = 5.3 mm); head and body slightly differentiated by a small nuchal constriction; head longer than wide (HW/HL 77.7%); snout rounded; six premaxillary teeth, beginning with two large anteromedian teeth that are flanked on either side by a posteriorly directed row of two slightly recurved teeth that gradually diminish in size; ten maxillary, slightly recurved teeth that gradually diminish in size, arrayed in an oblique row; rostral scale visible from above, subtriangular, ventrally expanded, wider and concave posteriorly, narrowly contacting the first supralabial and broadly contacting the nasal scales; nasal, prefrontal, frontal and parietal scales from both sides contacting along the midline of the head, forming a longitudinal suture (Figs 2A, 3A); nasal scale quadrangular, contacting the first supralabial, prefrontal and rostral scales; nostrils lateral, in the anteroventral part of the nasal scale; prefrontal scales roughly pentagonal, wider than long (PFW/PFL 92.9%), broadly contacting the nasal, frontal, ocular, first and second supralabial scales, having a narrow contact with the first supralabial scale and a broad contact with the second supralabial scale (Figs 2A, 3A); frontal scales trapezoidal, longer than wide (FW/FL 63.0%), in broad contact with the prefrontal, postocular and inner parietal scales and in narrow contact with the ocular scale.
Four parietal scales, roughly pentagonal; inner parietal scales longer than wide (IPW/IPL 91.4%), in broad contact with the frontal, postocular and outer parietal scales, as well as with the middorsal enlarged segments of the first body annulus; outer parietal scales wider than long (OPL/OPW 91.8%), in broad contact with the inner parietal and postocular scales; first body annular scales non-enlarged, but in narrow contact with the middorsal enlarged segments of the first body annulus; angulus oris lying in a transverse plane that passes through the posterior edges of the ocular scales and the center of the frontal scales (Figs 2B, 3E); three supralabial scales, the first subtriangular, longer than wide, in broad contact with the nasal and second supralabial scales and in narrow contact with the prefrontal and rostral scales; the second supralabial larger than the first and third supralabial scales, contacting the first and third supralabials and the temporal, ocular and prefrontal scales; third supralabial scale smaller than the first and second supralabial scales, contacting the second supralabial and temporal scales and in posterior contact with the first body annulus; ocular scales rhomboidal, longer than high (OH/OL 62.4%), in broad contact with the prefrontal, postocular, temporal and second supralabial scales, in narrow contact with the frontal scales; eye slightly visible in the anterior corner of the ocular scale; postocular scales roughly hexagonal, longer than wide (POW/POL 84.8%), broadly contacting the frontal, parietal, ocular and temporal scales and in posterior contact with the first body annulus; one temporal scale, roughly pentagonal, longer than wide (THE/TEL 68.1%), broadly contacting the second and third supralabial and ocular scales, as well as the first body annular scales. Mental scale quadrate, smaller and narrower than the rostral scale, longer than wide (MW/ML 94.8%), in broad contact with the postmental and first infralabial scales; postmental scale oblong, longer than wide (PMW/PML 70.3%), visibly longer than and in broad contact with the mental scale, the first and second infralabials and the postgenial scale row; three infralabial scales, the first trapezoidal, longer than wide, in broad contact with the mental, postmental and second supralabial scales; second infralabial scale larger than the first and third infralabial scales, broadly contacting the first and third infralabials and the postmalar scale row; third infralabial scale smaller than the first and second infralabial scales, in contact with the second infralabial scale and the postmalar scale row and in posterior contact with the first body annulus; malar scales absent; postgenial scale row composed of four segments, in contact with the second infralabial and postmental scales and in posterior contact with the postmalar row of scales; postmalar row of scales composed of seven segments (Figs 2C, 3C). Body annuli demarcated; lateral and middorsal sulci present, beginning from the 16th (left) or 18th (right) body annulus; 245 body annuli, 13 dorsal segments per annulus at midbody, 16 ventral segments per annulus at midbody; first body annulus with one enlarged middorsal segment on each side contacting the posterior edge of the inner parietals and abutting onto the posterolateral edge of the outer parietal scales; middorsal segments of the second and third body annuli non-enlarged (Figs 2A, 3A); four rounded precloacal pores; anal flap semicircular; four cloacal annuli, six caudal annuli (incomplete tail), caudal autotomy site between the sixth and seventh caudal annuli (Figs 2D, 3G).

Color of the holotype in life (Fig. 4).
Dorsal and ventral surfaces from dark brown to dark reddish-brown; occipital, parietal, frontal, temporal, third supralabial, third infralabial and postmental scales, as well as the postgenial and postmalar scale rows, heavily pigmented dark brown; rostral, prefrontal, ocular, nasal, first and second supralabial, mental and first infralabial scales faded dark brown.

Color of the holotype in preservative (Fig. 2). After seven years in preservative, the dorsal and ventral surfaces, as well as the head scales, maintained the dark brown coloration, with slight differences from the color in life, such as a faint grey cast on the dorsal and ventral surfaces and a few unpigmented scales.

Etymology. We dedicate this species to the Kazakhstani scientist Alexandra Asanovna Elbakyan (Russian: Александра Асановна Элбакян), creator of the website Sci-Hub, for her colossal contributions to reducing the barriers in the way of science, as well as for her assertion that "everyone has the right to participate and share in scientific advancement and its benefits, freely and without economic constraints".

Distribution and natural history. The known localities of Amphisbaena elbakyanae sp. nov. are distributed in the flooded savanna ecosystem of the Orocué and Ariporo River basins, as well as in the drained savanna ecosystem of the Bita River basin in the department of Vichada (Fig. 5). Amphisbaena elbakyanae sp. nov. seems to be highly associated with the leaf litter of the flooded savanna forest dominated by moriche palm (Mauritia flexuosa), commonly known as "morichales" or "cananguchales" in Colombia (Fig. 6). The new species was found in sympatry with A. alba and A. fuliginosa.

Discussion
In this research, molecular and morphological evidence allowed us to confirm that Amphisbaena elbakyanae sp. nov. represents a new species of amphisbaenian from northern South America (sensu Eva and Huber 2005). Our phylogenetic analysis suggests that Amphisbaena elbakyanae sp. nov., together with A. cunhai and A. mertensii from central-southern Brazil, is part of the same monophyletic clade (Fig. 1). However, large genetic distances for the ND2 gene fragment were found between Amphisbaena elbakyanae sp. nov. and both A. cunhai and A. mertensii (28.9% and 26.1%, respectively). Currently, molecular data for several species from northern South America are lacking (e.g. A. medemi, A. spurrelli, A. gracilis, A. vanzolinii, and A. stejnegeri), limiting our understanding of the evolutionary relationships of northern South American amphisbaenians. Therefore, it is crucial to include many more taxa to formulate a complete phylogenetic hypothesis that may reduce spurious phylogenetic relationships, basal polytomies and poorly supported nodes (Teixeira et al. 2014). Despite the scarcity of molecular data, our analyses revealed that the new taxon is not closely related to the sympatric species A. alba or A. fuliginosa (Fig. 1), as confirmed by the large genetic distances between them (Table 3). The morphological evidence allowed us to clearly diagnose Amphisbaena elbakyanae sp. nov. as a lineage distinct from the 50 nominal four-pored Amphisbaena species, demonstrating that it was an undescribed species of worm lizard from Colombia. Furthermore, both the molecular and morphological evidence agree with Gans and Mathers' (1977) division of the amphisbaenians of northern South America into two groups: the first group includes two larger, wide-ranging species (A. alba and A. fuliginosa), and the second group comprises six smaller, narrow-ranging species (A. gracilis, A. medemi, A. rozei, A. spurrelli, A. stejnegeri and A. vanzolinii).
Based on its morphological characters, Amphisbaena elbakyanae sp. nov. can be allocated to Gans and Mathers' second group. Interestingly, Amphisbaena elbakyanae sp. nov. exhibits close morphological similarity both to a closely distributed taxon (e.g. A. gracilis) and to geographically distant taxa (e.g. A. cunhai, A. frontalis, A. talisiae and A. slateri). Moreover, Amphisbaena elbakyanae sp. nov. and A. gracilis are the continental worm lizards that seem to have the greatest affinity with the Antillean Amphisbaena species, showing a lack of malar scales, four precloacal pores, relatively small size and uniform dorsal and ventral pigmentation. Additionally, A. elbakyanae sp. nov., together with A. gracilis and A. medemi, are the only forms of the northern mainland that have fewer dorsal than ventral segments per midbody annulus, closely resembling the Antillean Amphisbaena species (Gans and Alexander 1962; Gonzalez and Gans 1971; Gans and Mathers 1977). This situation leaves open the question of whether such morphological similarities reflect shared evolutionary ancestry or convergent evolution of characters, a product of adaptation to similar habitats (Harmon et al. 2005; Edwards et al. 2012). Some authors have claimed that parallelism, understood as the independent evolution of similar traits starting from a similar ancestral condition, could be another explanation for the morphological similarities between Amphisbaena species (Mott and Vieites 2009). Vidal et al. (2008) dated the split between African and South American Amphisbaenidae at 40 Mya (Eocene), proposing that transatlantic dispersal from Africa to South America + West Indies could explain this divergence. According to Gonzalez and Gans (1971), the West Indies species may be the ancestors of the northern South American Amphisbaena species. Consequently, the similarities between some Antillean and South American species may have resulted from the retention of a primitive character pattern in a zone geographically peripheral to the range of the genus. Although we cannot directly assess Gonzalez-Sponga and Gans's hypothesis, the distant evolutionary relationship between Amphisbaena elbakyanae sp. nov. and the Antillean species A. caeca and A. xera revealed by our phylogenetic and genetic distance analyses (Fig. 1; Table 3), as well as the distant relationships shown by Pyron et al. (2013: Fig. 12K) between A. cunhai and A. mertensii (the species that form a monophyletic clade together with Amphisbaena elbakyanae sp. nov.) and the Antillean species (i.e. A. bakeri, A. caeca, A. cubana, A. fenestrata, A. manni, A. schmidti and A. xera), suggest that recent evolutionary ancestry may not be the cause of the morphological similarities. These and many more questions concerning northern South American worm lizards remain open, evidencing that the state of knowledge in many fields is still extremely fragmentary.

Conclusions
Amphisbaena elbakyanae sp. nov., described as a new species from the Orinoquian savanna ecosystem of Colombia, seems to be related to A. cunhai and A. mertensii from central-southern Brazil. This species of Amphisbaena is one of several still-unrecognized evolutionary lineages of worm lizards that sit on Colombian museum shelves waiting to be described.
We think that the lack of worm lizard studies in Colombia derives from three main factors: first, insufficient funding for field and museum research; second, large areas that still lack intensive sampling; and third, few investigators searching for worm lizards and few experts and trained personnel capable of describing species (Gascon et al. 2007; Ospina-Sarria and Angarita-Sierra 2020). Therefore, the description of this new Amphisbaena species points out the urgent need to create a research grant program that could support field surveys and research in several disciplines to increase our knowledge of worm lizards, as well as help to train researchers to describe species, including the known but as-yet-undescribed species currently housed in Colombian biological collections. Studies of taxonomy and species descriptions in a megadiverse country like Colombia play a substantial role in the conservation of our natural heritage. Thus, encouraging these activities will allow an evaluation of biodiversity loss and the development of systematic conservation planning and practices, as well as a scientific basis for the value judgments that shape environmental policies and laws.
2021-05-07T00:03:08.600Z
2021-03-05T00:00:00.000
{ "year": 2021, "sha1": "ee544839888748f1ee5e2e705467f27f1eafe856", "oa_license": "CCBY", "oa_url": "https://vertebrate-zoology.arphahub.com/article/59461/download/pdf/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "38741345663930d0b3c54488c77f8c0a84926ccd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
246508673
pes2o/s2orc
v3-fos-license
Spurious infection by Calodium hepaticum (Bancroft, 1893) Moravec, 1982 and intestinal parasites in forest reserve dwellers in the Western Brazilian Amazon

ABSTRACT
Subsistence hunting is the main source of protein for forest reserve dwellers, contributing to the development of spurious infections by Calodium hepaticum, frequently associated with the consumption of the liver of wild mammals. The prevalence of infections by soil-transmitted helminths (STHs) and intestinal protozoa is considered an indicator of the social vulnerability of a country, besides providing information on the habits, customs and quality of life of a given population. Intestinal parasites mostly affect poor rural communities with limited access to clean water and adequate sanitation. This study reports the results of a parasitological survey carried out in 2017 and 2019 in two municipalities (Xapuri and Sena Madureira) in Acre State. Stool samples were collected from 276 inhabitants. Upon receipt, each sample was divided into two aliquots. Fresh samples without preservative were processed and examined by the Kato-Katz technique. Samples fixed in 10% formalin were processed by the spontaneous sedimentation and centrifugal sedimentation techniques. Calodium hepaticum eggs were found in three stool samples. The overall STH prevalence was 44.9%. The hookworm prevalence (19.2%) was higher than that of Ascaris lumbricoides (2.5%) and Trichuris trichiura (0.7%), an unexpected finding for municipalities belonging to the Western Brazilian Amazon. When considering parasites transmitted via the fecal-oral route, Endolimax nana and Entamoeba coli showed the highest positivity rates, of 13% and 10.9%, respectively. This study is the first report of spurious infection by C. hepaticum among forest reserve dwellers that consume the undercooked liver of lowland pacas. Additionally, this is the first report of Blastocystis sp. in Acre State.

INTRODUCTION
Calodium hepaticum (Bancroft, 1893) Moravec, 1982 (syn. Capillaria hepatica) is a zoonotic nematode present in a wide range of mammal hosts (rodents, lagomorphs, canids, ruminants, non-human primates and humans), as well as birds and fish, but rodents are the main hosts 1. The parasite has a direct (monoxenous) life cycle. Upon ingestion of embryonated eggs by natural hosts, L1 larvae hatch in the cecum, penetrate the intestinal wall and migrate to the liver, where they develop into mature worms. Within the hepatic tissue, fertilized adult females release non-embryonated eggs and die 2. An obvious question is how C. hepaticum propagates its life cycle. There are apparently two main mechanisms: disintegrating carcasses of dead hosts allow non-embryonated eggs to reach the external environment; and, through a predator-prey relationship, non-embryonated eggs are excreted along with the host feces into the environment. Under favorable conditions (oxygen, moisture and temperature), the L1 larva develops within the egg to an infective stage that will sustain the life cycle 3. Humans acquire the infection by two mechanisms: by ingesting embryonated eggs from contaminated environments, whereby humans develop a rare and often fatal liver disease, also reported as the true infection 2; and by eating raw or undercooked livers of wild hosts, whereby eggs pass along the gastrointestinal tract and are expelled in the fecal material (spurious infection) [4][5][6]. To date, 138 cases of spurious infection have been reported in the literature 7.
Of these, 93 occurred in Brazil [4][5][6][8][9][10][11][12][13][14] (Table 1), suggesting that C. hepaticum is a foodborne parasite in this country. In Acre State, forest reserve dwellers hunt game as a major subsistence strategy for protein consumption. These prey include red brocket deer (Mazama americana), white-lipped peccary (Tayassu pecari) and lowland or spotted paca (Cuniculus paca) 15. It is well recognized that these animals are natural hosts of C. hepaticum in Brazilian biomes 6,14. Previously, we reported the finding of C. hepaticum eggs in liver tissues from a free-ranging paca trapped in the Bujari permanent preservation area in Acre 16 (09°49'50"S, 67°57'08"W). However, it remains to be determined whether people eating undercooked or raw livers are subject to spurious infections in the state. Soil-transmitted helminth (STH) infections are considered indicators of the social vulnerability of a country, in addition to providing information on the habits, customs, and quality of life of a given population. STHs and pathogenic intestinal protozoa mostly affect poor rural communities with limited access to clean water and adequate sanitation 17. In this epidemiological scenario, school-aged children and adolescents are especially vulnerable to STH infections, which cause significant physical, nutritional and cognitive impairment 17. Forest dwellers typically practice open defecation on the ground near their homes and have no access to clean water. These conditions give rise to their exposure to intestinal parasites (STHs and protozoa), acquired via the fecal-oral route or by skin penetration. Here we report a parasitological survey of forest reserve dwellers from two municipalities (Xapuri and Sena Madureira) in Acre State. In addition, the sociodemographic characteristics of participating households were evaluated. Our findings provide, for the first time, evidence of spurious infection by C. hepaticum in forest reserve dwellers in Acre. The occurrence of intestinal protozoa and STHs is also discussed.

Ethics approval and consent to participate
The study was approved by the Oswaldo Cruz Foundation's Research Ethics Committee (CAAE, Nº 38091514.6.0000.5248). Written informed consent was obtained from all adult participants and from the parents or legal guardians of minors.

Study area and population
For the past 15 years, we have carried out a research program designed to investigate human and animal polycystic echinococcosis in municipalities located in Acre (AC). This State (09°03'S; 68°39'W) is located in the Western Brazilian Amazon, bordering Peru (west) and Bolivia (south), Amazonas State (north) and Rondônia State (east). The survey reported here was performed in 2017 and 2019 in rural communities located within two extractive reserves (RESEX): Seringal Boa Vista (10°18'57.88"S; 68°43'19.83"W) in the Chico Mendes Extractive Reserve, and Cuidado (9º09'12"S; 69º02'01"W) in the Cazumba-Iracema Extractive Reserve. These communities are located in the municipalities of Xapuri and Sena Madureira, respectively. Xapuri and Sena Madureira are situated 188 km and 145 km, respectively, from the State capital, Rio Branco. The residents of the extractive reserves live in the Amazon rainforest. Similar to other rural counties located in Acre 18,19, Xapuri and Sena Madureira have poor environmental indicators. The dwellings are exclusively wooden and lack appropriate sanitation. Residents practice open defecation on the ground near their homes.
In addition, residents have no access to clean water, which is supplied directly from a well located a few meters from their houses 19.

Study participants and laboratory procedures
During home visits in Seringal Boa Vista and Cuidado, after explaining the aims of the study to residents, a single stool sample was obtained in an empty plastic container provided for this purpose. Upon receipt, each fecal sample was divided into two aliquots. One aliquot was analyzed without preservative, and the other was preserved in 10% formalin for later examination. Stool samples were examined microscopically for the presence of eggs, larvae and protozoan cysts using the spontaneous sedimentation 20 and centrifugal sedimentation 21 techniques. Fresh unpreserved stool samples were examined by the Kato-Katz thick-smear technique 22, using the commercial Helm Test® (Bio-Manguinhos/Fiocruz, Rio de Janeiro, Brazil) according to the manufacturer's specifications. Each stool sample was used to prepare a microscope slide, which was examined with an Eclipse E-200 optical microscope (Nikon, Japan) equipped with a Nikon DS-Fi1 digital camera (Nikon, China) and processed with the NIS-Elements AR 3.0 image analysis software (Nikon, USA). A structured questionnaire was administered in person to collect data from the participants, containing questions regarding demographics (place of birth, gender, age, educational status, monthly household income) and social and family aspects (marital status, number of children, dog ownership, hunting activity, meat-eating habits, household sanitation conditions and source of drinking water).

Statistical analysis
Statistical analyses were performed using GraphPad Instat version 3.01 (GraphPad Inc., USA). Associations between the positivity of each parasite and the variables age, gender, monoparasitism and polyparasitism were investigated using Fisher's exact test. Statistical significance was set at P < 0.05, and P values < 0.01 were considered highly significant.
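As an illustration of the test applied here, the sketch below runs a two-sided Fisher's exact test on a hypothetical 2×2 table (parasite positivity by gender); the counts are placeholders rather than the survey's data, and scipy is used in place of GraphPad Instat, which computes the same statistic.

```python
# Minimal sketch of the 2x2 Fisher's exact test used for the
# parasite-positivity associations. Counts are hypothetical.
from scipy.stats import fisher_exact

#                 positive  negative
table = [[40, 96],    # males   (hypothetical)
         [13, 127]]   # females (hypothetical)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, P = {p_value:.4f}")
# As in the paper: P < 0.05 significant; P < 0.01 highly significant.
```

Fisher's exact test is the natural choice over a chi-squared approximation here, since several of the per-parasite cell counts in a survey of this size are small.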
RESULTS

As shown in Table 2, the proportions of males (53.3%) and females (46.7%) were almost equal. The prevalence of monoparasitism (71.77%) compared to polyparasitism (28.23%) was highly significant (P < 0.0001), while no significant difference according to gender was found (P = 0.2751). Among parasites transmitted via the fecal-oral route, Endolimax nana (13%; 36/276) showed the highest positivity rate. It is worth highlighting that the helminth eggs found by the Kato-Katz technique were hookworm, C. hepaticum, Ascaris lumbricoides and Trichuris trichiura. These data are included in Table 2, where the results of spontaneous sedimentation are also shown.

DISCUSSION

Consistent with previous studies carried out in the Western Brazilian Amazon 5,9,23-25, our parasitological findings revealed that the studied population is exposed to intestinal parasites (protozoa and STHs). This is not surprising, given that forest reserve dwellers have no sanitary facilities, practice open defecation near their homes and have no access to clean water. Polyparasitism was observed less often than monoparasitism. Similar findings were reported in other communities in Acre State, including Assis Brasil and Acrelandia 18,19, as well as in riverside settlements in the Southern Pantanal region (Midwest macroregion) 26. Among protozoa transmitted via the fecal-oral route, E. nana and E. coli showed the highest positivity rates, in accordance with previous studies 18,19. On the other hand, only three stool samples (1.1%) contained G. duodenalis cysts, in contrast to the high prevalence (19.6%) among residents of the municipality of Acrelandia (Acre) 19. Given that G. duodenalis excretion in fecal samples can be sporadic, the infection can be missed if only one sample is examined. To overcome this limitation, the examination of multiple fecal samples has been proposed, which reduces the number of false-negative results 27. However, this can cause logistical problems for sample collection during fieldwork in rural communities with difficult access, such as Seringal Boa Vista (Xapuri municipality) and Cuidado (Sena Madureira municipality). Furthermore, the observed prevalence of protozoa may be related to the methodologies employed, since the most sensitive approach for detecting cysts and oocysts is flotation.

Blastocystis sp. is one of the most common enteric protists, reported worldwide in both humans and a wide range of animals of several taxa 28,29. However, information regarding its distribution in the Brazilian Amazon is limited. There is substantial heterogeneity among data from Brazilian indigenous populations, ranging from 57.8% in Oriximina (Para State) 30 to 20.0% in Confresa (Mato Grosso State) 31. In Amazonas State, individuals living in rural areas, such as Santa Isabel do Rio Negro, show a higher prevalence (10.2%) 32 compared to that (0.7%) of urban residents living in the Amazonas State capital, Manaus 23. To the best of our knowledge, this is the first report of Blastocystis sp. in Acre. Forest dwellers have two risk factors for acquiring Blastocystis sp. infection. Firstly, they live in poor social and environmental conditions that facilitate the transmission of parasites via the fecal-oral route 19. Secondly, Blastocystis sp. is capable of infecting a broad range of hosts, including pets, synanthropic animals and wild animals 29, and local residents are in close contact with all of these animals.

Out of 138 cases of spurious infection reported in the literature 7, 93 were in Brazil, suggesting that C. hepaticum can be categorized as a foodborne parasite in the country. We found that C. hepaticum eggs were released by a 45-year-old woman from Xapuri and by a 27-year-old woman and her 12-year-old son from Sena Madureira, all of them riverine residents (Caete River). We next investigated a possible source of infection. Wild game meat is a major protein source for forest-dwelling populations 15. During hunting, men stay in the forest for 1-2 days, where larger animals, such as red brocket deer (Mazama americana), lowland tapir (Tapirus terrestris) and collared peccary (Pecari tajacu), are eviscerated on the spot, while smaller animals are eviscerated only after the hunters return home. This eating behavior can pose a threat to these human populations. Previous studies have reported that individuals can develop C. hepaticum spurious infections by consuming raw or undercooked livers from wild hosts, both in Amazonian and in Southern Brazilian communities [4][5][6]13,14. In the present study, all the families reported eating white-lipped peccary, capybara (Hydrochoerus hydrochaeris) and lowland paca. Although C. hepaticum infects white-lipped peccary in the Brazilian Amazon 5,6,13, this finding has never been reported in Acre. In addition, capybaras can serve as hosts of Capillaria hydrochoeri, whose adult worms are gastrointestinal parasites; the eggs released along with host feces show distinct morphometric traits in comparison with those of C. hepaticum 33.
Although C. hepaticum eggs show morphometric differences according to the microenvironment (liver from different hosts or feces) 4,5,14,34, we confirmed the species identification on the basis of the morphological and morphometric determination of 100 eggs 5. In Acre, it seems more likely that people become infected by eating paca meat 15. Moreover, a traditional local recipe is prepared from undercooked liver, suggesting that this cultural dietary habit may be the source of C. hepaticum infections in local residents. This result is consistent with our previous publication, which revealed for the first time the presence of C. hepaticum in liver tissues of a paca trapped in the municipality of Bujari (Acre) 16.

Over the last decade, a few studies have estimated the prevalence of STHs in the North region of Brazil. In this study, the most common STH was hookworm, confirming previous findings in Acrelandia 19. Regarding distribution according to age and gender, children aged 1-14 years showed a lower prevalence than participants aged 15-55 years (40/276, 22.7%). Given that hookworms are skin-invading nematodes, children are predisposed to hookworm infections due to the habit of walking barefoot 35. Our results for the 15-55 age group showed that males had a roughly three-fold higher risk (27.2%) than females (10.1%) of acquiring hookworm infections. It is likely that men are more exposed to hookworm infections due to the traditional division of labor, in which they carry out agricultural, cattle-raising and hunting activities, while women play mainly domestic roles 36. Obviously, the habit of walking barefoot cannot be excluded among women.

Studies from various municipalities in Amazonas State have demonstrated high prevalence rates of A. lumbricoides and T. trichiura 23,24. As shown in Table 2, both STHs showed low prevalences here: 2.5% for A. lumbricoides and 0.7% for T. trichiura. This finding is similar to data from Acrelandia 13. Moreover, it goes against the expectation that A. lumbricoides is the most prevalent STH in Acre; indeed, a recent nationwide population-based sample of school-aged children (7-17 years) showed a higher prevalence of A. lumbricoides (19.14%) than of other geohelminths 37. One possible explanation is periodic mass preventive deworming through the administration of albendazole and mebendazole. While this etiological treatment strategy reduces the prevalence of helminths, the presence of intestinal protozoa could remain stable or even increase 38.

There is little consensus on the prevalence rates of S. stercoralis in Latin America 39. In the Brazilian Amazon, they range between 5.6% (Santa Isabel do Rio Negro, Amazonas State) 32 and 2.6% (Acrelandia) 18. In our survey, only 2 of 276 stool samples (0.7%) were positive for S. stercoralis. It is likely that both the spontaneous sedimentation and the centrifugal sedimentation techniques are relatively insensitive for detecting S. stercoralis in stool samples. Furthermore, the excretion of larvae in stool specimens is intermittent and scanty 40. To overcome these limitations in fieldwork, the use of more appropriate methods, such as the Baermann-Moraes technique and the examination of consecutive samples, has been suggested 31,40.

Limitations and perspectives

Our study had limitations. As previously noted, both G. duodenalis and S. stercoralis show day-to-day variability in the excretion of cysts and larvae, which could have influenced our results. Examination of multiple fecal samples, with or without preservative solution, could overcome this limitation, as the simple detection-probability sketch below illustrates.
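The benefit of consecutive samples can be made quantitative: if a single examination detects an infection with per-sample sensitivity p, and excretion on different days is treated as independent, the probability of at least one positive result among k samples is 1 − (1 − p)^k. The sketch below illustrates this; the sensitivity value used is an assumed illustrative figure, not a measured property of the techniques employed here.

```python
# Minimal sketch: cumulative detection probability with repeated sampling.
# ASSUMPTION: the per-sample sensitivity p is illustrative (not measured
# in this study), and day-to-day shedding is treated as independent,
# a simplification for organisms with intermittent excretion.

def detection_probability(p: float, k: int) -> float:
    """Probability of >= 1 positive result among k independent samples."""
    return 1.0 - (1.0 - p) ** k

p = 0.50  # assumed single-sample sensitivity for a sporadically shed parasite
for k in (1, 2, 3):
    print(f"{k} sample(s): detection probability = {detection_probability(p, k):.0%}")
# Prints 50%, 75%, 88% -- showing why consecutive samples reduce
# false negatives for G. duodenalis and S. stercoralis.
```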
However, this procedure has drawbacks in remote rural communities, such as the lack of rural electrification or even of fuel-powered electric generators, which precludes the use of refrigerators for preserving fecal samples. Moreover, many people live in dwellings located along unpaved roads within the forest, making travel to laboratories for parasitological examination difficult. Indeed, road access can be completely blocked during the rainy season (December to May). It is clear that further research will be necessary to gain insight into the possible roles of humans, animals or both in G. duodenalis and Blastocystis sp. transmission in Acre State.

CONCLUSION

Our findings provide, for the first time, evidence of spurious C. hepaticum infections in Acre State. The forest reserve dwellers surveyed probably become infected by consuming undercooked paca liver. Their precarious sanitary conditions likely explain the finding of soil-transmitted helminths and intestinal protozoa, among which hookworm eggs were the most common finding in stool samples. Finally, the presence of Blastocystis sp. in Acre State is reported here for the first time.